00:00:00.001 Started by upstream project "autotest-per-patch" build number 122814
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.055 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.055 The recommended git tool is: git
00:00:00.056 using credential 00000000-0000-0000-0000-000000000002
00:00:00.057 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.086 Fetching changes from the remote Git repository
00:00:00.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.135 Using shallow fetch with depth 1
00:00:00.135 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.135 > git --version # timeout=10
00:00:00.175 > git --version # 'git version 2.39.2'
00:00:00.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.176 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.176 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.978 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.990 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.003 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD)
00:00:05.003 > git config core.sparsecheckout # timeout=10
00:00:05.014 > git read-tree -mu HEAD # timeout=10
00:00:05.030 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5
00:00:05.049 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule"
00:00:05.049 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10
00:00:05.155 [Pipeline] Start of Pipeline
00:00:05.189 [Pipeline] library
00:00:05.195 Loading library shm_lib@master
00:00:05.195 Library shm_lib@master is cached. Copying from home.
00:00:05.209 [Pipeline] node
00:00:05.220 Running on GP11 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:05.221 [Pipeline] {
00:00:05.230 [Pipeline] catchError
00:00:05.231 [Pipeline] {
00:00:05.240 [Pipeline] wrap
00:00:05.247 [Pipeline] {
00:00:05.252 [Pipeline] stage
00:00:05.253 [Pipeline] { (Prologue)
00:00:05.409 [Pipeline] sh
00:00:05.689 + logger -p user.info -t JENKINS-CI
00:00:05.710 [Pipeline] echo
00:00:05.711 Node: GP11
00:00:05.721 [Pipeline] sh
00:00:06.018 [Pipeline] setCustomBuildProperty
00:00:06.029 [Pipeline] echo
00:00:06.031 Cleanup processes
00:00:06.036 [Pipeline] sh
00:00:06.316 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.316 312386 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.330 [Pipeline] sh
00:00:06.612 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.612 ++ grep -v 'sudo pgrep'
00:00:06.612 ++ awk '{print $1}'
00:00:06.612 + sudo kill -9
00:00:06.612 + true
00:00:06.627 [Pipeline] cleanWs
00:00:06.638 [WS-CLEANUP] Deleting project workspace...
00:00:06.638 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.645 [WS-CLEANUP] done
00:00:06.650 [Pipeline] setCustomBuildProperty
00:00:06.665 [Pipeline] sh
00:00:06.945 + sudo git config --global --replace-all safe.directory '*'
00:00:07.017 [Pipeline] nodesByLabel
00:00:07.019 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.028 [Pipeline] httpRequest
00:00:07.032 HttpMethod: GET
00:00:07.033 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:07.036 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:07.040 Response Code: HTTP/1.1 200 OK
00:00:07.041 Success: Status code 200 is in the accepted range: 200,404
00:00:07.042 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:07.912 [Pipeline] sh
00:00:08.198 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:08.217 [Pipeline] httpRequest
00:00:08.222 HttpMethod: GET
00:00:08.222 URL: http://10.211.164.101/packages/spdk_2260a96a90c2c6e612611d80ba821fa6dac56480.tar.gz
00:00:08.223 Sending request to url: http://10.211.164.101/packages/spdk_2260a96a90c2c6e612611d80ba821fa6dac56480.tar.gz
00:00:08.234 Response Code: HTTP/1.1 200 OK
00:00:08.235 Success: Status code 200 is in the accepted range: 200,404
00:00:08.236 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_2260a96a90c2c6e612611d80ba821fa6dac56480.tar.gz
00:00:30.469 [Pipeline] sh
00:00:30.749 + tar --no-same-owner -xf spdk_2260a96a90c2c6e612611d80ba821fa6dac56480.tar.gz
00:00:33.296 [Pipeline] sh
00:00:33.578 + git -C spdk log --oneline -n5
00:00:33.578 2260a96a9 nvmf: Add mDNS Pull Registration Request to CHANGELOG.md
00:00:33.578 dc2eace61 nvmf: Add test for mDNS Pull Registration Request
00:00:33.578 48352fbb8 nvmf: Add API for stopping mDNS Pull Registration Request
00:00:33.578 e2d543802 nvmf: Add support for mDNS Pull Registration Requests (TP-8009, TP-8010a, and TP-8024)
00:00:33.578 985ef53a7 test/mock: introduce DEFINE_WRAPPER_MOCK() macro
00:00:33.590 [Pipeline] }
00:00:33.607 [Pipeline] // stage
00:00:33.617 [Pipeline] stage
00:00:33.619 [Pipeline] { (Prepare)
00:00:33.636 [Pipeline] writeFile
00:00:33.654 [Pipeline] sh
00:00:33.933 + logger -p user.info -t JENKINS-CI
00:00:33.945 [Pipeline] sh
00:00:34.226 + logger -p user.info -t JENKINS-CI
00:00:34.236 [Pipeline] sh
00:00:34.514 + cat autorun-spdk.conf
00:00:34.514 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.514 SPDK_TEST_NVMF=1
00:00:34.514 SPDK_TEST_NVME_CLI=1
00:00:34.514 SPDK_TEST_NVMF_NICS=mlx5
00:00:34.514 SPDK_RUN_UBSAN=1
00:00:34.514 NET_TYPE=phy
00:00:34.522 RUN_NIGHTLY=0
00:00:34.527 [Pipeline] readFile
00:00:34.551 [Pipeline] withEnv
00:00:34.552 [Pipeline] {
00:00:34.564 [Pipeline] sh
00:00:34.861 + set -ex
00:00:34.861 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:00:34.861 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:34.861 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.861 ++ SPDK_TEST_NVMF=1
00:00:34.861 ++ SPDK_TEST_NVME_CLI=1
00:00:34.861 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:34.861 ++ SPDK_RUN_UBSAN=1
00:00:34.861 ++ NET_TYPE=phy
00:00:34.861 ++ RUN_NIGHTLY=0
00:00:34.861 + case $SPDK_TEST_NVMF_NICS in
00:00:34.861 + DRIVERS=mlx5_ib
00:00:34.861 + [[ -n mlx5_ib ]]
00:00:34.861 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:34.861 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:34.861 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:34.861 rmmod: ERROR: Module irdma is not currently loaded
00:00:34.861 rmmod: ERROR: Module i40iw is not currently loaded
00:00:34.861 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:34.861 + true
00:00:34.861 + for D in $DRIVERS
00:00:34.861 + sudo modprobe mlx5_ib
00:00:34.861 + exit 0
00:00:34.871 [Pipeline] }
00:00:34.891 [Pipeline] // withEnv
00:00:34.896 [Pipeline] }
00:00:34.913 [Pipeline] // stage
00:00:34.923 [Pipeline] catchError
00:00:34.925 [Pipeline] {
00:00:34.941 [Pipeline] timeout
00:00:34.941 Timeout set to expire in 40 min
00:00:34.943 [Pipeline] {
00:00:34.958 [Pipeline] stage
00:00:34.961 [Pipeline] { (Tests)
00:00:34.978 [Pipeline] sh
00:00:35.258 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:00:35.258 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:00:35.258 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:00:35.258 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:00:35.258 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:35.258 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:00:35.258 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:00:35.258 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:35.258 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:00:35.258 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:35.258 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:00:35.258 + source /etc/os-release
00:00:35.258 ++ NAME='Fedora Linux'
00:00:35.258 ++ VERSION='38 (Cloud Edition)'
00:00:35.258 ++ ID=fedora
00:00:35.258 ++ VERSION_ID=38
00:00:35.258 ++ VERSION_CODENAME=
00:00:35.258 ++ PLATFORM_ID=platform:f38
00:00:35.258 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:35.258 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:35.258 ++ LOGO=fedora-logo-icon
00:00:35.258 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:35.258 ++ HOME_URL=https://fedoraproject.org/
00:00:35.258 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:35.258 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:35.258 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:35.258 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:35.258 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:35.258 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:35.258 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:35.258 ++ SUPPORT_END=2024-05-14
00:00:35.258 ++ VARIANT='Cloud Edition'
00:00:35.258 ++ VARIANT_ID=cloud
00:00:35.258 + uname -a
00:00:35.258 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:35.258 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:00:36.633 Hugepages
00:00:36.633 node hugesize free / total
00:00:36.633 node0 1048576kB 0 / 0
00:00:36.633 node0 2048kB 0 / 0
00:00:36.633 node1 1048576kB 0 / 0
00:00:36.633 node1 2048kB 0 / 0
00:00:36.633 
00:00:36.633 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:36.633 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:36.633 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:36.633 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:36.633 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:36.633 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:36.633 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:36.633 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:36.633 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:36.633 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:36.634 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:36.634 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:36.634 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:36.634 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:36.634 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:36.634 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:36.634 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:36.634 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:36.634 + rm -f /tmp/spdk-ld-path
00:00:36.634 + source autorun-spdk.conf
00:00:36.634 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:36.634 ++ SPDK_TEST_NVMF=1
00:00:36.634 ++ SPDK_TEST_NVME_CLI=1
00:00:36.634 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:36.634 ++ SPDK_RUN_UBSAN=1
00:00:36.634 ++ NET_TYPE=phy
00:00:36.634 ++ RUN_NIGHTLY=0
00:00:36.634 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:36.634 + [[ -n '' ]]
00:00:36.634 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:36.634 + for M in /var/spdk/build-*-manifest.txt
00:00:36.634 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:36.634 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:36.634 + for M in /var/spdk/build-*-manifest.txt
00:00:36.634 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:36.634 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:36.634 ++ uname
00:00:36.634 + [[ Linux == \L\i\n\u\x ]]
00:00:36.634 + sudo dmesg -T
00:00:36.634 + sudo dmesg --clear
00:00:36.634 + dmesg_pid=313160
00:00:36.634 + [[ Fedora Linux == FreeBSD ]]
00:00:36.634 + sudo dmesg -Tw
00:00:36.634 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:36.634 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:36.634 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:36.634 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:36.634 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:36.634 + [[ -x /usr/src/fio-static/fio ]]
00:00:36.634 + export FIO_BIN=/usr/src/fio-static/fio
00:00:36.634 + FIO_BIN=/usr/src/fio-static/fio
00:00:36.634 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:36.634 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:36.634 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:36.634 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:36.634 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:36.634 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:36.634 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:36.634 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:36.634 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:36.634 Test configuration:
00:00:36.634 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:36.634 SPDK_TEST_NVMF=1
00:00:36.634 SPDK_TEST_NVME_CLI=1
00:00:36.634 SPDK_TEST_NVMF_NICS=mlx5
00:00:36.634 SPDK_RUN_UBSAN=1
00:00:36.634 NET_TYPE=phy
00:00:36.634 RUN_NIGHTLY=0
23:47:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:00:36.634 23:47:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:36.634 23:47:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:36.634 23:47:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:36.634 23:47:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:36.634 23:47:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:36.634 23:47:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:36.634 23:47:05 -- paths/export.sh@5 -- $ export PATH
00:00:36.634 23:47:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:36.634 23:47:05 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:00:36.634 23:47:05 -- common/autobuild_common.sh@437 -- $ date +%s
00:00:36.634 23:47:05 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715723225.XXXXXX
00:00:36.634 23:47:05 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715723225.Vg4hRS
00:00:36.634 23:47:05 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:00:36.634 23:47:05 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:00:36.634 23:47:05 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:00:36.634 23:47:05 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:36.634 23:47:05 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:36.634 23:47:05 -- common/autobuild_common.sh@453 -- $ get_config_params
00:00:36.634 23:47:05 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:00:36.634 23:47:05 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.634 23:47:05 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:00:36.634 23:47:05 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:00:36.634 23:47:05 -- pm/common@17 -- $ local monitor
00:00:36.634 23:47:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:36.634 23:47:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:36.634 23:47:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:36.634 23:47:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:36.634 23:47:05 -- pm/common@21 -- $ date +%s
00:00:36.634 23:47:05 -- pm/common@21 -- $ date +%s
00:00:36.634 23:47:05 -- pm/common@25 -- $ sleep 1
00:00:36.634 23:47:05 -- pm/common@21 -- $ date +%s
00:00:36.634 23:47:05 -- pm/common@21 -- $ date +%s
00:00:36.634 23:47:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715723225
00:00:36.634 23:47:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715723225
00:00:36.634 23:47:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715723225
00:00:36.634 23:47:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715723225
00:00:36.634 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715723225_collect-vmstat.pm.log
00:00:36.634 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715723225_collect-cpu-load.pm.log
00:00:36.634 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715723225_collect-cpu-temp.pm.log
00:00:36.634 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715723225_collect-bmc-pm.bmc.pm.log
00:00:37.574 23:47:06 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:00:37.574 23:47:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:37.574 23:47:06 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:37.574 23:47:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:37.574 23:47:06 -- spdk/autobuild.sh@16 -- $ date -u
00:00:37.574 Tue May 14 09:47:06 PM UTC 2024
00:00:37.574 23:47:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:37.574 v24.05-pre-647-g2260a96a9
00:00:37.574 23:47:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:37.574 23:47:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:37.574 23:47:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:37.574 23:47:06 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:00:37.574 23:47:06 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:00:37.574 23:47:06 -- common/autotest_common.sh@10 -- $ set +x
00:00:37.574 ************************************
00:00:37.574 START TEST ubsan
00:00:37.574 ************************************
00:00:37.574 23:47:06 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:00:37.574 using ubsan
00:00:37.574 
00:00:37.574 real 0m0.000s
00:00:37.574 user 0m0.000s
00:00:37.574 sys 0m0.000s
00:00:37.574 23:47:06 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:00:37.574 23:47:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:37.574 ************************************
00:00:37.574 END TEST ubsan
00:00:37.574 ************************************
00:00:37.833 23:47:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:37.833 23:47:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:37.833 23:47:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:37.833 23:47:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:37.833 23:47:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:37.833 23:47:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:37.833 23:47:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:37.833 23:47:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:37.833 23:47:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:37.833 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:00:37.833 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:00:38.092 Using 'verbs' RDMA provider
00:00:48.643 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:58.620 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:58.620 Creating mk/config.mk...done.
00:00:58.620 Creating mk/cc.flags.mk...done.
00:00:58.620 Type 'make' to build.
00:00:58.620 23:47:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:00:58.620 23:47:27 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:00:58.620 23:47:27 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:00:58.620 23:47:27 -- common/autotest_common.sh@10 -- $ set +x
00:00:58.620 ************************************
00:00:58.620 START TEST make
00:00:58.620 ************************************
00:00:58.620 23:47:27 make -- common/autotest_common.sh@1121 -- $ make -j48
00:00:58.620 make[1]: Nothing to be done for 'all'.
00:01:06.790 The Meson build system
00:01:06.790 Version: 1.3.1
00:01:06.790 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:01:06.790 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:01:06.790 Build type: native build
00:01:06.790 Program cat found: YES (/usr/bin/cat)
00:01:06.790 Project name: DPDK
00:01:06.790 Project version: 23.11.0
00:01:06.790 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:06.790 C linker for the host machine: cc ld.bfd 2.39-16
00:01:06.790 Host machine cpu family: x86_64
00:01:06.790 Host machine cpu: x86_64
00:01:06.790 Message: ## Building in Developer Mode ##
00:01:06.790 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:06.790 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:06.790 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:06.790 Program python3 found: YES (/usr/bin/python3)
00:01:06.790 Program cat found: YES (/usr/bin/cat)
00:01:06.790 Compiler for C supports arguments -march=native: YES
00:01:06.790 Checking for size of "void *" : 8
00:01:06.790 Checking for size of "void *" : 8 (cached)
00:01:06.790 Library m found: YES
00:01:06.790 Library numa found: YES
00:01:06.790 Has header "numaif.h" : YES
00:01:06.790 Library fdt found: NO
00:01:06.790 Library execinfo found: NO
00:01:06.790 Has header "execinfo.h" : YES
00:01:06.790 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:06.790 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:06.790 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:06.790 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:06.790 Run-time dependency openssl found: YES 3.0.9
00:01:06.790 Run-time dependency libpcap found: YES 1.10.4
00:01:06.790 Has header "pcap.h" with dependency libpcap: YES
00:01:06.790 Compiler for C supports arguments -Wcast-qual: YES
00:01:06.790 Compiler for C supports arguments -Wdeprecated: YES
00:01:06.790 Compiler for C supports arguments -Wformat: YES
00:01:06.790 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:06.790 Compiler for C supports arguments -Wformat-security: NO
00:01:06.790 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:06.790 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:06.790 Compiler for C supports arguments -Wnested-externs: YES
00:01:06.790 Compiler for C supports arguments -Wold-style-definition: YES
00:01:06.790 Compiler for C supports arguments -Wpointer-arith: YES
00:01:06.790 Compiler for C supports arguments -Wsign-compare: YES
00:01:06.790 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:06.790 Compiler for C supports arguments -Wundef: YES
00:01:06.790 Compiler for C supports arguments -Wwrite-strings: YES
00:01:06.790 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:06.790 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:06.790 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:06.790 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:06.790 Program objdump found: YES (/usr/bin/objdump)
00:01:06.790 Compiler for C supports arguments -mavx512f: YES
00:01:06.790 Checking if "AVX512 checking" compiles: YES
00:01:06.790 Fetching value of define "__SSE4_2__" : 1
00:01:06.790 Fetching value of define "__AES__" : 1
00:01:06.790 Fetching value of define "__AVX__" : 1
00:01:06.790 Fetching value of define "__AVX2__" : (undefined)
00:01:06.790 Fetching value of define "__AVX512BW__" : (undefined)
00:01:06.790 Fetching value of define "__AVX512CD__" : (undefined)
00:01:06.790 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:06.790 Fetching value of define "__AVX512F__" : (undefined)
00:01:06.790 Fetching value of define "__AVX512VL__" : (undefined)
00:01:06.790 Fetching value of define "__PCLMUL__" : 1
00:01:06.790 Fetching value of define "__RDRND__" : 1
00:01:06.790 Fetching value of define "__RDSEED__" : (undefined)
00:01:06.790 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:06.790 Fetching value of define "__znver1__" : (undefined)
00:01:06.790 Fetching value of define "__znver2__" : (undefined)
00:01:06.790 Fetching value of define "__znver3__" : (undefined)
00:01:06.790 Fetching value of define "__znver4__" : (undefined)
00:01:06.790 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:06.790 Message: lib/log: Defining dependency "log"
00:01:06.790 Message: lib/kvargs: Defining dependency "kvargs"
00:01:06.790 Message: lib/telemetry: Defining dependency "telemetry"
00:01:06.790 Checking for function "getentropy" : NO
00:01:06.790 Message: lib/eal: Defining dependency "eal"
00:01:06.790 Message: lib/ring: Defining dependency "ring"
00:01:06.790 Message: lib/rcu: Defining dependency "rcu"
00:01:06.790 Message: lib/mempool: Defining dependency "mempool"
00:01:06.790 Message: lib/mbuf: Defining dependency "mbuf"
00:01:06.790 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:06.790 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:06.790 Compiler for C supports arguments -mpclmul: YES
00:01:06.790 Compiler for C supports arguments -maes: YES
00:01:06.790 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:06.790 Compiler for C supports arguments -mavx512bw: YES
00:01:06.790 Compiler for C supports arguments -mavx512dq: YES
00:01:06.790 Compiler for C supports arguments -mavx512vl: YES
00:01:06.790 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:06.790 Compiler for C supports arguments -mavx2: YES
00:01:06.790 Compiler for C supports arguments -mavx: YES
00:01:06.790 Message: lib/net: Defining dependency "net"
00:01:06.790 Message: lib/meter: Defining dependency "meter"
00:01:06.790 Message: lib/ethdev: Defining dependency "ethdev"
00:01:06.790 Message: lib/pci: Defining dependency "pci"
00:01:06.790 Message: lib/cmdline: Defining dependency "cmdline"
00:01:06.790 Message: lib/hash: Defining dependency "hash"
00:01:06.790 Message: lib/timer: Defining dependency "timer"
00:01:06.790 Message: lib/compressdev: Defining dependency "compressdev"
00:01:06.790 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:06.790 Message: lib/dmadev: Defining dependency "dmadev"
00:01:06.790 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:06.790 Message: lib/power: Defining dependency "power"
00:01:06.790 Message: lib/reorder: Defining dependency "reorder"
00:01:06.790 Message: lib/security: Defining dependency "security"
00:01:06.790 Has header "linux/userfaultfd.h" : YES
00:01:06.790 Has header "linux/vduse.h" : YES
00:01:06.790 Message: lib/vhost: Defining dependency "vhost"
00:01:06.790 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:06.790 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:06.790 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:06.790 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:06.790 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:06.790 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:06.791 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:06.791 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:06.791 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:06.791 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:06.791 Program doxygen found: YES (/usr/bin/doxygen)
00:01:06.791 Configuring doxy-api-html.conf using configuration
00:01:06.791 Configuring doxy-api-man.conf using configuration
00:01:06.791 Program mandb found: YES (/usr/bin/mandb)
00:01:06.791 Program sphinx-build found: NO
00:01:06.791 Configuring rte_build_config.h using configuration
00:01:06.791 Message: 
00:01:06.791 =================
00:01:06.791 Applications Enabled
00:01:06.791 =================
00:01:06.791 
00:01:06.791 apps:
00:01:06.791 
00:01:06.791 
00:01:06.791 Message: 
00:01:06.791 =================
00:01:06.791 Libraries Enabled
00:01:06.791 =================
00:01:06.791 
00:01:06.791 libs:
00:01:06.791 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:01:06.791 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:01:06.791 cryptodev, dmadev, power, reorder, security, vhost, 
00:01:06.791 
00:01:06.791 Message: 
00:01:06.791 ===============
00:01:06.791 Drivers Enabled
00:01:06.791 ===============
00:01:06.791 
00:01:06.791 common:
00:01:06.791 
00:01:06.791 bus:
00:01:06.791 pci, vdev, 
00:01:06.791 mempool:
00:01:06.791 ring, 
00:01:06.791 dma:
00:01:06.791 
00:01:06.791 net:
00:01:06.791 
00:01:06.791 crypto:
00:01:06.791 
00:01:06.791 compress:
00:01:06.791 
00:01:06.791 vdpa:
00:01:06.791 
00:01:06.791 
00:01:06.791 Message: 
00:01:06.791 =================
00:01:06.791 Content Skipped
00:01:06.791 =================
00:01:06.791 
00:01:06.791 apps:
00:01:06.791 dumpcap: explicitly disabled via build config
00:01:06.791 graph: explicitly disabled via build config
00:01:06.791 pdump: explicitly disabled via build config
00:01:06.791 proc-info: explicitly disabled via build config
00:01:06.791 test-acl: explicitly disabled via build config
00:01:06.791 test-bbdev: explicitly disabled via build config
00:01:06.791 test-cmdline: explicitly disabled via build config
00:01:06.791 test-compress-perf: explicitly disabled via build config
00:01:06.791 test-crypto-perf: explicitly disabled via build config
00:01:06.791 test-dma-perf: explicitly disabled via build config
00:01:06.791 test-eventdev: explicitly disabled via build config
00:01:06.791 test-fib: explicitly disabled via build config
00:01:06.791 test-flow-perf: explicitly disabled via build config
00:01:06.791 test-gpudev: explicitly disabled via build config
00:01:06.791 test-mldev: explicitly disabled via build config
00:01:06.791 test-pipeline: explicitly disabled via build config
00:01:06.791 test-pmd: explicitly disabled via build config
00:01:06.791 test-regex: explicitly disabled via build config
00:01:06.791 test-sad: explicitly disabled via build config
00:01:06.791 test-security-perf: explicitly disabled via build config
00:01:06.791 
00:01:06.791 libs:
00:01:06.791 metrics: explicitly disabled via build config
00:01:06.791 acl: explicitly disabled via build config
00:01:06.791 bbdev: explicitly disabled via build config
00:01:06.791 bitratestats: explicitly disabled via build config
00:01:06.791 bpf: explicitly disabled via build config
00:01:06.791 cfgfile: explicitly disabled via build config
00:01:06.791 distributor: explicitly disabled via build config
00:01:06.791 efd: explicitly disabled via build config
00:01:06.791 eventdev: explicitly disabled via build config
00:01:06.791 dispatcher: explicitly disabled via build config
00:01:06.791 gpudev: explicitly disabled via build config
00:01:06.791 gro: explicitly disabled via build config
00:01:06.791 gso: explicitly disabled via build config
00:01:06.791 ip_frag: explicitly disabled via build config
00:01:06.791 jobstats: explicitly disabled via build config
00:01:06.791 latencystats: explicitly disabled via build config
00:01:06.791 lpm: explicitly disabled via build config
00:01:06.791 member: explicitly disabled via build config
00:01:06.791 pcapng: explicitly disabled via build config
00:01:06.791 rawdev: explicitly disabled via build config
00:01:06.791 regexdev: explicitly disabled via build config
00:01:06.791 mldev: explicitly disabled via build config
00:01:06.791 rib: explicitly disabled via build config
00:01:06.791 sched: explicitly disabled via build config
00:01:06.791 stack: explicitly disabled via build config
00:01:06.791 ipsec: explicitly disabled via build config
00:01:06.791 pdcp: explicitly disabled via build config
00:01:06.791 fib: explicitly disabled via build config
00:01:06.791 port: explicitly disabled via build config
00:01:06.791 pdump: explicitly disabled via build config
00:01:06.791 table: explicitly disabled via build config
00:01:06.791 pipeline: explicitly disabled via build config
00:01:06.791 graph: explicitly disabled via build config
00:01:06.791 node: explicitly disabled via build config
00:01:06.791 
00:01:06.791 drivers:
00:01:06.791 common/cpt: not in enabled drivers build config
00:01:06.791 common/dpaax: not in enabled drivers build config
00:01:06.791 common/iavf: not in enabled drivers build config
00:01:06.791 common/idpf: not in enabled drivers build config
00:01:06.791 common/mvep: not in enabled drivers build config
00:01:06.791 common/octeontx: not in enabled drivers build config
00:01:06.791 bus/auxiliary: not in enabled drivers build config
00:01:06.791 bus/cdx: not in enabled drivers build config
00:01:06.791 bus/dpaa: not in enabled drivers build config
00:01:06.791 bus/fslmc: not in enabled drivers build config
00:01:06.791 bus/ifpga: not in enabled drivers build config
00:01:06.791 bus/platform: not in enabled drivers build config
00:01:06.791 bus/vmbus: not in enabled drivers build config
00:01:06.791 common/cnxk: not in enabled drivers build config
00:01:06.791 common/mlx5: not in enabled drivers build config
00:01:06.791 common/nfp: not in enabled drivers build config
00:01:06.791 common/qat: not in enabled drivers build config
00:01:06.791 common/sfc_efx: not in enabled drivers build config
00:01:06.791 mempool/bucket: not in enabled drivers build config
00:01:06.791 mempool/cnxk: not in enabled drivers build config
00:01:06.791 mempool/dpaa: not in enabled drivers build config
00:01:06.791 mempool/dpaa2: not in enabled drivers build config
00:01:06.791 mempool/octeontx: not in enabled drivers build config
00:01:06.791 mempool/stack: not in enabled drivers build config
00:01:06.791 dma/cnxk: not in enabled drivers build config
00:01:06.791 dma/dpaa: not in enabled drivers build config
00:01:06.791 dma/dpaa2: not in enabled drivers build config
00:01:06.791 dma/hisilicon: not in enabled drivers build config
00:01:06.791 dma/idxd: not in enabled drivers build config
00:01:06.791 dma/ioat: not in enabled drivers build config
00:01:06.791 dma/skeleton: not in enabled drivers build config
00:01:06.791 net/af_packet: not in enabled drivers build config
00:01:06.791 net/af_xdp: not in enabled drivers build config
00:01:06.791 net/ark: not in enabled drivers build config
00:01:06.791 net/atlantic: not in enabled drivers build config
00:01:06.791 net/avp: not in enabled drivers build config
00:01:06.791 net/axgbe: not in enabled drivers build config
00:01:06.791 net/bnx2x: not in enabled drivers build config
00:01:06.791 net/bnxt: not in enabled drivers build config
00:01:06.791 net/bonding: not in enabled drivers build config
00:01:06.791 net/cnxk: not in enabled drivers build config
00:01:06.791 net/cpfl: not in enabled drivers build config
00:01:06.791 net/cxgbe: not in enabled drivers build config
00:01:06.791 net/dpaa: not in enabled drivers build config
00:01:06.791 net/dpaa2: not in enabled drivers build config
00:01:06.791 net/e1000: not in enabled drivers build config
00:01:06.791 net/ena: not in enabled drivers build config
00:01:06.791 net/enetc: not in enabled drivers build config
00:01:06.791 net/enetfec: not in enabled drivers build config
00:01:06.791 net/enic: not in enabled drivers build config
00:01:06.791 net/failsafe: not in enabled drivers build config
00:01:06.791 net/fm10k: not in enabled drivers build config
00:01:06.791 net/gve: not in enabled drivers build config
00:01:06.791 net/hinic: not in enabled drivers build config
00:01:06.791 net/hns3: not in enabled drivers build config
00:01:06.791 net/i40e: not in enabled drivers build config
00:01:06.791 net/iavf: not in enabled drivers build config
00:01:06.791 net/ice: not in enabled drivers build config
00:01:06.791 net/idpf: not in enabled drivers build config
00:01:06.791 net/igc: not in enabled drivers build config
00:01:06.791 net/ionic: not in enabled drivers build config
00:01:06.791 net/ipn3ke: not in enabled drivers build config
00:01:06.791 net/ixgbe: not in enabled drivers build config
00:01:06.791 net/mana: not in enabled drivers build config
00:01:06.791 net/memif: not in enabled drivers build config
00:01:06.791 net/mlx4: not in enabled drivers build config
00:01:06.791 net/mlx5: not in enabled drivers build config
00:01:06.791 net/mvneta: not in enabled drivers build config
00:01:06.791 net/mvpp2: not in enabled drivers build config
00:01:06.791 net/netvsc: not in enabled drivers build config
00:01:06.791 net/nfb: not in enabled drivers build config
00:01:06.791 net/nfp: not in enabled drivers build config
00:01:06.791 net/ngbe: not in enabled drivers build config
00:01:06.791 net/null: not in enabled drivers build config
00:01:06.791 net/octeontx: not in enabled drivers build config
00:01:06.791 net/octeon_ep: not in enabled drivers build config
00:01:06.791 net/pcap: not in enabled drivers build config
00:01:06.791 net/pfe: not in enabled drivers build config
00:01:06.791 net/qede: not in enabled drivers build config
00:01:06.791 net/ring: not in enabled drivers build config
00:01:06.791 net/sfc: not in enabled drivers build config
00:01:06.791 net/softnic: not in enabled drivers build config
00:01:06.791 net/tap: not in enabled drivers build config
00:01:06.791 net/thunderx: not in enabled drivers build config
00:01:06.791 net/txgbe: not in enabled drivers build config
00:01:06.791 net/vdev_netvsc: not in enabled drivers build config
00:01:06.791 net/vhost: not in enabled drivers build config
00:01:06.791 net/virtio: not in enabled drivers build config
00:01:06.791 net/vmxnet3: not in enabled drivers build config
00:01:06.791 raw/*: missing internal dependency, "rawdev"
00:01:06.791 crypto/armv8: not in enabled drivers build config
00:01:06.791 crypto/bcmfs: not in enabled drivers build config
00:01:06.791 crypto/caam_jr: not in enabled drivers build config
00:01:06.791 crypto/ccp: not in enabled drivers build config
00:01:06.791 crypto/cnxk: not in enabled drivers build config
00:01:06.791 crypto/dpaa_sec: not in enabled drivers build config
00:01:06.791 crypto/dpaa2_sec: not in enabled drivers build config
00:01:06.792 crypto/ipsec_mb: not in enabled drivers build config
00:01:06.792 crypto/mlx5: not in enabled drivers build config
00:01:06.792 crypto/mvsam: not in enabled drivers build config
00:01:06.792 crypto/nitrox: not in enabled drivers build config
00:01:06.792 crypto/null: not in enabled drivers build config
00:01:06.792 crypto/octeontx: not in enabled drivers build config
00:01:06.792 crypto/openssl: not in enabled drivers build config
00:01:06.792 crypto/scheduler: not in enabled drivers build config
00:01:06.792 crypto/uadk: not in enabled drivers build config
00:01:06.792 crypto/virtio: not in enabled drivers build config
00:01:06.792 compress/isal: not in enabled drivers build config
00:01:06.792 compress/mlx5: not in enabled drivers build config
00:01:06.792 compress/octeontx: not in enabled drivers build config
00:01:06.792 compress/zlib: not in enabled drivers build config
00:01:06.792 regex/*: missing internal dependency, "regexdev"
00:01:06.792 ml/*: missing internal dependency, "mldev"
00:01:06.792 vdpa/ifc: not in enabled drivers build config
00:01:06.792 vdpa/mlx5: not in enabled drivers build config
00:01:06.792 vdpa/nfp: not in enabled drivers build config
00:01:06.792 vdpa/sfc: not in enabled drivers build config
00:01:06.792 event/*: missing internal dependency, "eventdev"
00:01:06.792 baseband/*: missing internal dependency, "bbdev"
00:01:06.792 gpu/*: missing internal dependency, "gpudev"
00:01:06.792 
00:01:06.792 
00:01:07.050 Build targets in project: 85
00:01:07.050 
00:01:07.050 DPDK 23.11.0
00:01:07.050 
00:01:07.050 User defined options
00:01:07.050 buildtype : debug
00:01:07.050 default_library : shared
00:01:07.050 libdir : lib
00:01:07.050 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:07.050 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:07.050 c_link_args : 
00:01:07.050 cpu_instruction_set: native
00:01:07.050 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:07.050 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:01:07.050 enable_docs : false
00:01:07.050 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:07.050 enable_kmods : false
00:01:07.050 tests : false
00:01:07.050 
00:01:07.050 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:07.625 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:01:07.625 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:07.625 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:07.625 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:07.625 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:07.625 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:07.625 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:07.625 [7/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:07.625 [8/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:07.625 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:07.625 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:07.625 [11/265] Linking static target lib/librte_kvargs.a
00:01:07.625 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:07.625 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:07.625 [14/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:07.884 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:07.884 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:07.884 [17/265] Linking static target lib/librte_log.a
00:01:07.884 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:07.884 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:07.884 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:08.148 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:08.409 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.409 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:08.673 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:08.673 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:08.673 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:08.673 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:08.673 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:08.673 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:08.673 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:08.673 [31/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:08.673 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:08.673 [33/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:08.673 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:08.673 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:08.673 [36/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:08.673 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:08.673 [38/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:08.673 [39/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:08.673 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:08.673 [41/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:08.673 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:08.673 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:08.673 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:08.673 [45/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:08.673 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:08.673 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:08.673 [48/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:08.674 [49/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:08.674 [50/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:08.674 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:08.674 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:08.674 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:08.674 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:08.674 [55/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:08.674 [56/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:08.674 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:08.674 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:08.674 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:08.674 [60/265] Linking static target lib/librte_telemetry.a
00:01:08.674 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:08.674 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:08.674 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:08.674 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:08.674 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:08.674 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:08.674 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:08.935 [68/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:08.935 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:08.935 [70/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.935 [71/265] Linking static target lib/librte_pci.a
00:01:08.935 [72/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:08.935 [73/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:08.935 [74/265] Linking target lib/librte_log.so.24.0
00:01:08.935 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:09.199 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:09.199 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:09.200 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:09.200 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:09.200 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:09.200 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:09.200 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:09.200 [83/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:09.200 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:09.200 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:09.200 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:09.200 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:09.458 [88/265] Linking target lib/librte_kvargs.so.24.0
00:01:09.458 [89/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:09.458 [90/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:09.458 [91/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:09.458 [92/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:09.458 [93/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:09.458 [94/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:09.458 [95/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:09.458 [96/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:09.458 [97/265] Linking static target lib/librte_ring.a
00:01:09.458 [98/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:09.717 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:09.717 [100/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:09.717 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:09.717 [102/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:09.717 [103/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:09.717 [104/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:09.717 [105/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:09.717 [106/265] Linking static target lib/librte_meter.a
00:01:09.717 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:09.717 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:09.717 [109/265] Linking static target lib/librte_eal.a
00:01:09.717 [110/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:09.717 [111/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:09.717 [112/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:09.717 [113/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:09.717 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:09.717 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:09.717 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:09.717 [117/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:09.717 [118/265] Linking target lib/librte_telemetry.so.24.0
00:01:09.977 [119/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:09.977 [120/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:09.977 [121/265] Linking static target lib/librte_rcu.a
00:01:09.977 [122/265] Linking static target lib/librte_mempool.a
00:01:09.977 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:09.977 [124/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:09.977 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:09.977 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:09.977 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:09.977 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:09.977 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:09.977 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:09.977 [131/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:09.977 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:09.977 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:09.977 [134/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:10.242 [135/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:10.242 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:10.242 [137/265] Linking static target lib/librte_cmdline.a
00:01:10.242 [138/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:10.242 [139/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:10.242 [140/265] Linking static target lib/librte_net.a
00:01:10.242 [141/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:10.242 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:10.242 [143/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:10.242 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:10.242 [145/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:10.242 [146/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:10.242 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:10.502 [148/265] Linking static target lib/librte_timer.a
00:01:10.502 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:10.502 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:10.502 [151/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:10.502 [152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:10.502 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:10.760 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:10.760 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:10.760 [156/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:10.760 [157/265] Linking static target lib/librte_dmadev.a
00:01:10.760 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:10.760 [159/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:10.760 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:10.760 [161/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:10.760 [162/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:10.760 [163/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:10.760 [164/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:10.760 [165/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:10.760 [166/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:10.760 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:10.760 [168/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:11.019 [169/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:11.019 [170/265] Linking static target lib/librte_power.a
00:01:11.019 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:11.019 [172/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:11.019 [173/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:11.019 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:11.019 [175/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:11.019 [176/265] Linking static target lib/librte_compressdev.a
00:01:11.019 [177/265] Linking static target lib/librte_hash.a
00:01:11.019 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:11.019 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:11.019 [180/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:11.019 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:11.019 [182/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:11.019 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:11.019 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:11.019 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:11.019 [186/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:11.277 [187/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:11.277 [188/265] Linking static target lib/librte_reorder.a
00:01:11.277 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:11.277 [190/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:11.277 [191/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:11.277 [192/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:11.277 [193/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:11.277 [194/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:11.277 [195/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:11.277 [196/265] Linking static target lib/librte_mbuf.a
00:01:11.277 [197/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:11.277 [198/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:11.277 [199/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:11.277 [200/265] Linking static target drivers/librte_bus_pci.a
00:01:11.535 [201/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:11.535 [202/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:11.536 [203/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.536 [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:11.536 [205/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.536 [206/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:11.536 [207/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:11.536 [208/265] Linking static target lib/librte_security.a 00:01:11.536 [209/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:11.536 [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:11.536 [211/265] Linking static target drivers/librte_bus_vdev.a 00:01:11.536 [212/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:11.536 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:11.536 [214/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:11.536 [215/265] Linking static target drivers/librte_mempool_ring.a 00:01:11.794 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.794 [217/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.794 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.794 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:11.794 [220/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.794 [221/265] Linking static target lib/librte_ethdev.a 00:01:12.053 [222/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:12.053 [223/265] Linking static target lib/librte_cryptodev.a 00:01:12.988 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.923 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:15.823 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.823 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.082 [228/265] Linking target lib/librte_eal.so.24.0 00:01:16.082 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:16.082 [230/265] Linking target lib/librte_ring.so.24.0 00:01:16.082 [231/265] Linking target lib/librte_timer.so.24.0 00:01:16.082 [232/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:16.082 [233/265] Linking target lib/librte_pci.so.24.0 00:01:16.082 [234/265] Linking target lib/librte_meter.so.24.0 00:01:16.082 [235/265] Linking target lib/librte_dmadev.so.24.0 00:01:16.340 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:16.340 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:16.340 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:16.340 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:16.340 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:16.340 [241/265] Linking target lib/librte_rcu.so.24.0 
00:01:16.340 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:16.340 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:16.340 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:16.340 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:16.599 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:16.599 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:16.599 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:16.599 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:16.599 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:16.599 [251/265] Linking target lib/librte_net.so.24.0 00:01:16.599 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:16.857 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:16.857 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:16.857 [255/265] Linking target lib/librte_hash.so.24.0 00:01:16.857 [256/265] Linking target lib/librte_security.so.24.0 00:01:16.857 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:16.857 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:16.857 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:17.115 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:17.115 [261/265] Linking target lib/librte_power.so.24.0 00:01:19.642 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:19.642 [263/265] Linking static target lib/librte_vhost.a 00:01:20.577 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.577 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:20.577 INFO: autodetecting backend as ninja 00:01:20.577 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:21.544 CC lib/ut_mock/mock.o 00:01:21.544 CC lib/log/log.o 00:01:21.544 CC lib/log/log_flags.o 00:01:21.544 CC lib/ut/ut.o 00:01:21.544 CC lib/log/log_deprecated.o 00:01:21.544 LIB libspdk_ut_mock.a 00:01:21.544 SO libspdk_ut_mock.so.6.0 00:01:21.544 LIB libspdk_ut.a 00:01:21.544 LIB libspdk_log.a 00:01:21.544 SO libspdk_ut.so.2.0 00:01:21.544 SO libspdk_log.so.7.0 00:01:21.544 SYMLINK libspdk_ut_mock.so 00:01:21.544 SYMLINK libspdk_ut.so 00:01:21.544 SYMLINK libspdk_log.so 00:01:21.802 CXX lib/trace_parser/trace.o 00:01:21.802 CC lib/dma/dma.o 00:01:21.802 CC lib/ioat/ioat.o 00:01:21.802 CC lib/util/base64.o 00:01:21.802 CC lib/util/bit_array.o 00:01:21.802 CC lib/util/cpuset.o 00:01:21.802 CC lib/util/crc16.o 00:01:21.802 CC lib/util/crc32.o 00:01:21.802 CC lib/util/crc32c.o 00:01:21.802 CC lib/util/crc32_ieee.o 00:01:21.802 CC lib/util/crc64.o 00:01:21.802 CC lib/util/dif.o 00:01:21.802 CC lib/util/fd.o 00:01:21.802 CC lib/util/file.o 00:01:21.802 CC lib/util/hexlify.o 00:01:21.802 CC lib/util/iov.o 00:01:21.802 CC lib/util/math.o 00:01:21.802 CC lib/util/pipe.o 00:01:21.802 CC lib/util/strerror_tls.o 00:01:21.802 CC lib/util/string.o 00:01:21.802 CC lib/util/uuid.o 00:01:21.802 CC lib/util/fd_group.o 00:01:21.802 CC lib/util/xor.o 00:01:21.802 CC lib/util/zipf.o 00:01:22.060 CC lib/vfio_user/host/vfio_user_pci.o 00:01:22.060 CC lib/vfio_user/host/vfio_user.o 00:01:22.060 LIB libspdk_dma.a 00:01:22.060 SO libspdk_dma.so.4.0 
00:01:22.060 SYMLINK libspdk_dma.so 00:01:22.060 LIB libspdk_ioat.a 00:01:22.060 SO libspdk_ioat.so.7.0 00:01:22.319 SYMLINK libspdk_ioat.so 00:01:22.319 LIB libspdk_vfio_user.a 00:01:22.319 SO libspdk_vfio_user.so.5.0 00:01:22.319 SYMLINK libspdk_vfio_user.so 00:01:22.319 LIB libspdk_util.a 00:01:22.577 SO libspdk_util.so.9.0 00:01:22.577 SYMLINK libspdk_util.so 00:01:22.835 CC lib/idxd/idxd.o 00:01:22.835 CC lib/rdma/common.o 00:01:22.835 CC lib/env_dpdk/env.o 00:01:22.835 CC lib/conf/conf.o 00:01:22.835 CC lib/json/json_parse.o 00:01:22.835 CC lib/rdma/rdma_verbs.o 00:01:22.835 CC lib/env_dpdk/memory.o 00:01:22.835 CC lib/json/json_util.o 00:01:22.835 CC lib/vmd/led.o 00:01:22.835 CC lib/idxd/idxd_user.o 00:01:22.835 CC lib/vmd/vmd.o 00:01:22.835 CC lib/env_dpdk/pci.o 00:01:22.835 CC lib/json/json_write.o 00:01:22.835 CC lib/env_dpdk/init.o 00:01:22.835 CC lib/env_dpdk/threads.o 00:01:22.835 CC lib/env_dpdk/pci_ioat.o 00:01:22.835 CC lib/env_dpdk/pci_virtio.o 00:01:22.835 CC lib/env_dpdk/pci_vmd.o 00:01:22.835 CC lib/env_dpdk/pci_idxd.o 00:01:22.835 CC lib/env_dpdk/pci_event.o 00:01:22.835 CC lib/env_dpdk/sigbus_handler.o 00:01:22.835 CC lib/env_dpdk/pci_dpdk.o 00:01:22.835 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:22.835 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:22.835 LIB libspdk_trace_parser.a 00:01:22.835 SO libspdk_trace_parser.so.5.0 00:01:23.094 SYMLINK libspdk_trace_parser.so 00:01:23.094 LIB libspdk_conf.a 00:01:23.094 SO libspdk_conf.so.6.0 00:01:23.094 LIB libspdk_json.a 00:01:23.094 LIB libspdk_rdma.a 00:01:23.094 SYMLINK libspdk_conf.so 00:01:23.094 SO libspdk_json.so.6.0 00:01:23.094 SO libspdk_rdma.so.6.0 00:01:23.094 SYMLINK libspdk_json.so 00:01:23.094 SYMLINK libspdk_rdma.so 00:01:23.352 CC lib/jsonrpc/jsonrpc_server.o 00:01:23.352 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:23.352 CC lib/jsonrpc/jsonrpc_client.o 00:01:23.352 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:23.352 LIB libspdk_idxd.a 00:01:23.352 SO libspdk_idxd.so.12.0 00:01:23.352 LIB libspdk_vmd.a 00:01:23.352 SYMLINK libspdk_idxd.so 00:01:23.352 SO libspdk_vmd.so.6.0 00:01:23.611 SYMLINK libspdk_vmd.so 00:01:23.611 LIB libspdk_jsonrpc.a 00:01:23.611 SO libspdk_jsonrpc.so.6.0 00:01:23.611 SYMLINK libspdk_jsonrpc.so 00:01:23.869 CC lib/rpc/rpc.o 00:01:24.127 LIB libspdk_rpc.a 00:01:24.127 SO libspdk_rpc.so.6.0 00:01:24.127 SYMLINK libspdk_rpc.so 00:01:24.385 CC lib/notify/notify.o 00:01:24.385 CC lib/keyring/keyring.o 00:01:24.385 CC lib/trace/trace.o 00:01:24.385 CC lib/trace/trace_flags.o 00:01:24.385 CC lib/keyring/keyring_rpc.o 00:01:24.385 CC lib/notify/notify_rpc.o 00:01:24.385 CC lib/trace/trace_rpc.o 00:01:24.385 LIB libspdk_notify.a 00:01:24.385 SO libspdk_notify.so.6.0 00:01:24.643 SYMLINK libspdk_notify.so 00:01:24.643 LIB libspdk_keyring.a 00:01:24.643 LIB libspdk_trace.a 00:01:24.643 SO libspdk_keyring.so.1.0 00:01:24.643 SO libspdk_trace.so.10.0 00:01:24.643 SYMLINK libspdk_keyring.so 00:01:24.643 SYMLINK libspdk_trace.so 00:01:24.643 LIB libspdk_env_dpdk.a 00:01:24.901 SO libspdk_env_dpdk.so.14.0 00:01:24.902 CC lib/sock/sock.o 00:01:24.902 CC lib/sock/sock_rpc.o 00:01:24.902 CC lib/thread/thread.o 00:01:24.902 CC lib/thread/iobuf.o 00:01:24.902 SYMLINK libspdk_env_dpdk.so 00:01:25.160 LIB libspdk_sock.a 00:01:25.160 SO libspdk_sock.so.9.0 00:01:25.160 SYMLINK libspdk_sock.so 00:01:25.418 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:25.418 CC lib/nvme/nvme_ctrlr.o 00:01:25.418 CC lib/nvme/nvme_fabric.o 00:01:25.418 CC lib/nvme/nvme_ns_cmd.o 00:01:25.418 CC lib/nvme/nvme_ns.o 00:01:25.418 CC 
lib/nvme/nvme_pcie_common.o 00:01:25.418 CC lib/nvme/nvme_pcie.o 00:01:25.418 CC lib/nvme/nvme_qpair.o 00:01:25.418 CC lib/nvme/nvme.o 00:01:25.418 CC lib/nvme/nvme_quirks.o 00:01:25.418 CC lib/nvme/nvme_transport.o 00:01:25.418 CC lib/nvme/nvme_discovery.o 00:01:25.418 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:25.418 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:25.418 CC lib/nvme/nvme_tcp.o 00:01:25.418 CC lib/nvme/nvme_opal.o 00:01:25.418 CC lib/nvme/nvme_io_msg.o 00:01:25.418 CC lib/nvme/nvme_poll_group.o 00:01:25.418 CC lib/nvme/nvme_zns.o 00:01:25.418 CC lib/nvme/nvme_stubs.o 00:01:25.418 CC lib/nvme/nvme_auth.o 00:01:25.418 CC lib/nvme/nvme_cuse.o 00:01:25.418 CC lib/nvme/nvme_rdma.o 00:01:26.354 LIB libspdk_thread.a 00:01:26.354 SO libspdk_thread.so.10.0 00:01:26.354 SYMLINK libspdk_thread.so 00:01:26.613 CC lib/accel/accel.o 00:01:26.613 CC lib/virtio/virtio.o 00:01:26.613 CC lib/init/json_config.o 00:01:26.613 CC lib/accel/accel_rpc.o 00:01:26.613 CC lib/virtio/virtio_vhost_user.o 00:01:26.613 CC lib/init/subsystem.o 00:01:26.613 CC lib/accel/accel_sw.o 00:01:26.613 CC lib/virtio/virtio_vfio_user.o 00:01:26.613 CC lib/init/subsystem_rpc.o 00:01:26.613 CC lib/virtio/virtio_pci.o 00:01:26.613 CC lib/init/rpc.o 00:01:26.613 CC lib/blob/blobstore.o 00:01:26.613 CC lib/blob/request.o 00:01:26.613 CC lib/blob/zeroes.o 00:01:26.613 CC lib/blob/blob_bs_dev.o 00:01:26.872 LIB libspdk_init.a 00:01:26.872 SO libspdk_init.so.5.0 00:01:26.872 LIB libspdk_virtio.a 00:01:26.872 SYMLINK libspdk_init.so 00:01:27.130 SO libspdk_virtio.so.7.0 00:01:27.130 SYMLINK libspdk_virtio.so 00:01:27.130 CC lib/event/app.o 00:01:27.130 CC lib/event/reactor.o 00:01:27.130 CC lib/event/log_rpc.o 00:01:27.130 CC lib/event/app_rpc.o 00:01:27.130 CC lib/event/scheduler_static.o 00:01:27.697 LIB libspdk_event.a 00:01:27.697 SO libspdk_event.so.13.0 00:01:27.697 SYMLINK libspdk_event.so 00:01:27.697 LIB libspdk_accel.a 00:01:27.697 SO libspdk_accel.so.15.0 00:01:27.697 SYMLINK libspdk_accel.so 00:01:27.697 LIB libspdk_nvme.a 00:01:27.955 SO libspdk_nvme.so.13.0 00:01:27.955 CC lib/bdev/bdev.o 00:01:27.955 CC lib/bdev/bdev_rpc.o 00:01:27.955 CC lib/bdev/bdev_zone.o 00:01:27.955 CC lib/bdev/part.o 00:01:27.955 CC lib/bdev/scsi_nvme.o 00:01:28.213 SYMLINK libspdk_nvme.so 00:01:30.112 LIB libspdk_blob.a 00:01:30.113 SO libspdk_blob.so.11.0 00:01:30.113 SYMLINK libspdk_blob.so 00:01:30.113 CC lib/blobfs/blobfs.o 00:01:30.113 CC lib/blobfs/tree.o 00:01:30.113 CC lib/lvol/lvol.o 00:01:30.397 LIB libspdk_bdev.a 00:01:30.397 SO libspdk_bdev.so.15.0 00:01:30.397 SYMLINK libspdk_bdev.so 00:01:30.665 CC lib/nbd/nbd.o 00:01:30.665 CC lib/ublk/ublk.o 00:01:30.665 CC lib/nvmf/ctrlr.o 00:01:30.665 CC lib/scsi/dev.o 00:01:30.665 CC lib/nbd/nbd_rpc.o 00:01:30.665 CC lib/nvmf/ctrlr_discovery.o 00:01:30.665 CC lib/ublk/ublk_rpc.o 00:01:30.665 CC lib/ftl/ftl_core.o 00:01:30.665 CC lib/scsi/lun.o 00:01:30.665 CC lib/nvmf/ctrlr_bdev.o 00:01:30.665 CC lib/ftl/ftl_init.o 00:01:30.665 CC lib/scsi/port.o 00:01:30.665 CC lib/nvmf/subsystem.o 00:01:30.665 CC lib/ftl/ftl_layout.o 00:01:30.665 CC lib/ftl/ftl_debug.o 00:01:30.665 CC lib/ftl/ftl_io.o 00:01:30.665 CC lib/nvmf/nvmf.o 00:01:30.665 CC lib/scsi/scsi.o 00:01:30.665 CC lib/ftl/ftl_sb.o 00:01:30.665 CC lib/ftl/ftl_l2p.o 00:01:30.665 CC lib/scsi/scsi_bdev.o 00:01:30.665 CC lib/nvmf/transport.o 00:01:30.665 CC lib/nvmf/nvmf_rpc.o 00:01:30.665 CC lib/scsi/scsi_pr.o 00:01:30.665 CC lib/scsi/scsi_rpc.o 00:01:30.665 CC lib/ftl/ftl_l2p_flat.o 00:01:30.665 CC lib/ftl/ftl_nv_cache.o 00:01:30.665 CC 
lib/nvmf/tcp.o 00:01:30.665 CC lib/nvmf/stubs.o 00:01:30.665 CC lib/scsi/task.o 00:01:30.665 CC lib/ftl/ftl_band_ops.o 00:01:30.665 CC lib/ftl/ftl_band.o 00:01:30.665 CC lib/nvmf/mdns_server.o 00:01:30.665 CC lib/ftl/ftl_writer.o 00:01:30.665 CC lib/nvmf/rdma.o 00:01:30.665 CC lib/ftl/ftl_rq.o 00:01:30.665 CC lib/nvmf/auth.o 00:01:30.665 CC lib/ftl/ftl_reloc.o 00:01:30.665 CC lib/ftl/ftl_l2p_cache.o 00:01:30.665 CC lib/ftl/ftl_p2l.o 00:01:30.665 CC lib/ftl/mngt/ftl_mngt.o 00:01:30.665 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:30.665 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:30.665 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:30.665 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:30.665 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:30.924 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:31.188 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:31.188 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:31.188 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:31.188 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:31.188 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:31.188 LIB libspdk_blobfs.a 00:01:31.188 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:31.189 CC lib/ftl/utils/ftl_conf.o 00:01:31.189 CC lib/ftl/utils/ftl_md.o 00:01:31.189 CC lib/ftl/utils/ftl_mempool.o 00:01:31.189 CC lib/ftl/utils/ftl_bitmap.o 00:01:31.189 CC lib/ftl/utils/ftl_property.o 00:01:31.189 SO libspdk_blobfs.so.10.0 00:01:31.189 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:31.189 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:31.189 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:31.189 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:31.189 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:31.189 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:31.189 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:31.189 SYMLINK libspdk_blobfs.so 00:01:31.189 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:31.448 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:31.448 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:31.448 CC lib/ftl/base/ftl_base_dev.o 00:01:31.448 CC lib/ftl/base/ftl_base_bdev.o 00:01:31.448 LIB libspdk_lvol.a 00:01:31.448 CC lib/ftl/ftl_trace.o 00:01:31.448 SO libspdk_lvol.so.10.0 00:01:31.448 LIB libspdk_nbd.a 00:01:31.448 SYMLINK libspdk_lvol.so 00:01:31.448 SO libspdk_nbd.so.7.0 00:01:31.448 LIB libspdk_scsi.a 00:01:31.707 SO libspdk_scsi.so.9.0 00:01:31.707 SYMLINK libspdk_nbd.so 00:01:31.707 SYMLINK libspdk_scsi.so 00:01:31.707 LIB libspdk_ublk.a 00:01:31.707 SO libspdk_ublk.so.3.0 00:01:31.707 SYMLINK libspdk_ublk.so 00:01:31.966 CC lib/iscsi/conn.o 00:01:31.966 CC lib/vhost/vhost.o 00:01:31.966 CC lib/vhost/vhost_rpc.o 00:01:31.966 CC lib/iscsi/init_grp.o 00:01:31.966 CC lib/vhost/vhost_scsi.o 00:01:31.966 CC lib/iscsi/iscsi.o 00:01:31.966 CC lib/vhost/vhost_blk.o 00:01:31.966 CC lib/iscsi/md5.o 00:01:31.966 CC lib/vhost/rte_vhost_user.o 00:01:31.966 CC lib/iscsi/param.o 00:01:31.966 CC lib/iscsi/portal_grp.o 00:01:31.966 CC lib/iscsi/tgt_node.o 00:01:31.966 CC lib/iscsi/iscsi_subsystem.o 00:01:31.966 CC lib/iscsi/iscsi_rpc.o 00:01:31.966 CC lib/iscsi/task.o 00:01:32.224 LIB libspdk_ftl.a 00:01:32.224 SO libspdk_ftl.so.9.0 00:01:32.482 SYMLINK libspdk_ftl.so 00:01:33.048 LIB libspdk_vhost.a 00:01:33.048 SO libspdk_vhost.so.8.0 00:01:33.307 SYMLINK libspdk_vhost.so 00:01:33.307 LIB libspdk_nvmf.a 00:01:33.307 LIB libspdk_iscsi.a 00:01:33.307 SO libspdk_nvmf.so.18.0 00:01:33.307 SO libspdk_iscsi.so.8.0 00:01:33.564 SYMLINK libspdk_iscsi.so 00:01:33.565 SYMLINK libspdk_nvmf.so 00:01:33.823 CC module/env_dpdk/env_dpdk_rpc.o 00:01:33.823 CC module/keyring/file/keyring.o 00:01:33.823 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:33.823 CC 
module/accel/error/accel_error.o 00:01:33.823 CC module/keyring/file/keyring_rpc.o 00:01:33.823 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:33.823 CC module/blob/bdev/blob_bdev.o 00:01:33.823 CC module/scheduler/gscheduler/gscheduler.o 00:01:33.823 CC module/sock/posix/posix.o 00:01:33.823 CC module/accel/iaa/accel_iaa.o 00:01:33.823 CC module/accel/dsa/accel_dsa.o 00:01:33.823 CC module/accel/ioat/accel_ioat.o 00:01:33.823 CC module/accel/error/accel_error_rpc.o 00:01:33.823 CC module/accel/dsa/accel_dsa_rpc.o 00:01:33.823 CC module/accel/iaa/accel_iaa_rpc.o 00:01:33.823 CC module/accel/ioat/accel_ioat_rpc.o 00:01:33.823 LIB libspdk_env_dpdk_rpc.a 00:01:33.823 SO libspdk_env_dpdk_rpc.so.6.0 00:01:34.081 SYMLINK libspdk_env_dpdk_rpc.so 00:01:34.081 LIB libspdk_keyring_file.a 00:01:34.081 LIB libspdk_scheduler_gscheduler.a 00:01:34.081 LIB libspdk_scheduler_dpdk_governor.a 00:01:34.081 SO libspdk_scheduler_gscheduler.so.4.0 00:01:34.081 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:34.081 SO libspdk_keyring_file.so.1.0 00:01:34.081 LIB libspdk_accel_error.a 00:01:34.081 LIB libspdk_accel_ioat.a 00:01:34.081 LIB libspdk_scheduler_dynamic.a 00:01:34.081 SO libspdk_accel_error.so.2.0 00:01:34.081 LIB libspdk_accel_iaa.a 00:01:34.081 SYMLINK libspdk_scheduler_gscheduler.so 00:01:34.081 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:34.081 SO libspdk_scheduler_dynamic.so.4.0 00:01:34.081 SO libspdk_accel_ioat.so.6.0 00:01:34.081 SYMLINK libspdk_keyring_file.so 00:01:34.081 SO libspdk_accel_iaa.so.3.0 00:01:34.081 LIB libspdk_accel_dsa.a 00:01:34.081 SYMLINK libspdk_accel_error.so 00:01:34.081 SYMLINK libspdk_scheduler_dynamic.so 00:01:34.081 SO libspdk_accel_dsa.so.5.0 00:01:34.081 LIB libspdk_blob_bdev.a 00:01:34.081 SYMLINK libspdk_accel_ioat.so 00:01:34.081 SYMLINK libspdk_accel_iaa.so 00:01:34.081 SO libspdk_blob_bdev.so.11.0 00:01:34.081 SYMLINK libspdk_accel_dsa.so 00:01:34.338 SYMLINK libspdk_blob_bdev.so 00:01:34.596 CC module/blobfs/bdev/blobfs_bdev.o 00:01:34.596 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:34.596 CC module/bdev/null/bdev_null.o 00:01:34.596 CC module/bdev/null/bdev_null_rpc.o 00:01:34.596 CC module/bdev/gpt/gpt.o 00:01:34.596 CC module/bdev/passthru/vbdev_passthru.o 00:01:34.596 CC module/bdev/gpt/vbdev_gpt.o 00:01:34.596 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:34.596 CC module/bdev/malloc/bdev_malloc.o 00:01:34.596 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:34.596 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:34.596 CC module/bdev/lvol/vbdev_lvol.o 00:01:34.596 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:34.596 CC module/bdev/nvme/bdev_nvme.o 00:01:34.596 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:34.596 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:34.596 CC module/bdev/aio/bdev_aio.o 00:01:34.596 CC module/bdev/raid/bdev_raid.o 00:01:34.596 CC module/bdev/error/vbdev_error.o 00:01:34.596 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:34.596 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:34.596 CC module/bdev/error/vbdev_error_rpc.o 00:01:34.596 CC module/bdev/raid/bdev_raid_rpc.o 00:01:34.596 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:34.596 CC module/bdev/raid/bdev_raid_sb.o 00:01:34.596 CC module/bdev/aio/bdev_aio_rpc.o 00:01:34.596 CC module/bdev/nvme/nvme_rpc.o 00:01:34.596 CC module/bdev/ftl/bdev_ftl.o 00:01:34.596 CC module/bdev/iscsi/bdev_iscsi.o 00:01:34.596 CC module/bdev/nvme/bdev_mdns_client.o 00:01:34.596 CC module/bdev/raid/raid0.o 00:01:34.596 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:34.596 CC 
module/bdev/delay/vbdev_delay.o 00:01:34.596 CC module/bdev/split/vbdev_split.o 00:01:34.596 CC module/bdev/nvme/vbdev_opal.o 00:01:34.596 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:34.596 CC module/bdev/raid/raid1.o 00:01:34.596 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:34.596 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:34.596 CC module/bdev/split/vbdev_split_rpc.o 00:01:34.596 CC module/bdev/raid/concat.o 00:01:34.596 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:34.854 LIB libspdk_sock_posix.a 00:01:34.854 SO libspdk_sock_posix.so.6.0 00:01:34.854 LIB libspdk_blobfs_bdev.a 00:01:34.854 SO libspdk_blobfs_bdev.so.6.0 00:01:34.854 SYMLINK libspdk_sock_posix.so 00:01:34.854 LIB libspdk_bdev_split.a 00:01:34.854 SO libspdk_bdev_split.so.6.0 00:01:34.854 SYMLINK libspdk_blobfs_bdev.so 00:01:34.854 LIB libspdk_bdev_error.a 00:01:34.854 LIB libspdk_bdev_null.a 00:01:35.111 LIB libspdk_bdev_gpt.a 00:01:35.111 LIB libspdk_bdev_passthru.a 00:01:35.111 SYMLINK libspdk_bdev_split.so 00:01:35.111 SO libspdk_bdev_error.so.6.0 00:01:35.111 LIB libspdk_bdev_aio.a 00:01:35.111 SO libspdk_bdev_null.so.6.0 00:01:35.111 LIB libspdk_bdev_zone_block.a 00:01:35.111 LIB libspdk_bdev_ftl.a 00:01:35.111 SO libspdk_bdev_gpt.so.6.0 00:01:35.111 SO libspdk_bdev_passthru.so.6.0 00:01:35.111 SO libspdk_bdev_aio.so.6.0 00:01:35.111 SO libspdk_bdev_ftl.so.6.0 00:01:35.111 SO libspdk_bdev_zone_block.so.6.0 00:01:35.111 LIB libspdk_bdev_delay.a 00:01:35.111 SYMLINK libspdk_bdev_error.so 00:01:35.111 SYMLINK libspdk_bdev_null.so 00:01:35.111 SYMLINK libspdk_bdev_gpt.so 00:01:35.111 SYMLINK libspdk_bdev_passthru.so 00:01:35.111 SYMLINK libspdk_bdev_aio.so 00:01:35.111 SO libspdk_bdev_delay.so.6.0 00:01:35.111 LIB libspdk_bdev_iscsi.a 00:01:35.111 SYMLINK libspdk_bdev_zone_block.so 00:01:35.111 SYMLINK libspdk_bdev_ftl.so 00:01:35.111 LIB libspdk_bdev_malloc.a 00:01:35.111 SO libspdk_bdev_iscsi.so.6.0 00:01:35.111 SO libspdk_bdev_malloc.so.6.0 00:01:35.111 SYMLINK libspdk_bdev_delay.so 00:01:35.111 SYMLINK libspdk_bdev_iscsi.so 00:01:35.111 SYMLINK libspdk_bdev_malloc.so 00:01:35.111 LIB libspdk_bdev_virtio.a 00:01:35.369 SO libspdk_bdev_virtio.so.6.0 00:01:35.369 LIB libspdk_bdev_lvol.a 00:01:35.369 SO libspdk_bdev_lvol.so.6.0 00:01:35.369 SYMLINK libspdk_bdev_virtio.so 00:01:35.369 SYMLINK libspdk_bdev_lvol.so 00:01:35.627 LIB libspdk_bdev_raid.a 00:01:35.627 SO libspdk_bdev_raid.so.6.0 00:01:35.627 SYMLINK libspdk_bdev_raid.so 00:01:36.999 LIB libspdk_bdev_nvme.a 00:01:36.999 SO libspdk_bdev_nvme.so.7.0 00:01:36.999 SYMLINK libspdk_bdev_nvme.so 00:01:37.257 CC module/event/subsystems/iobuf/iobuf.o 00:01:37.257 CC module/event/subsystems/keyring/keyring.o 00:01:37.257 CC module/event/subsystems/scheduler/scheduler.o 00:01:37.257 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:37.257 CC module/event/subsystems/sock/sock.o 00:01:37.257 CC module/event/subsystems/vmd/vmd.o 00:01:37.257 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:37.257 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:37.516 LIB libspdk_event_keyring.a 00:01:37.516 LIB libspdk_event_sock.a 00:01:37.516 LIB libspdk_event_vhost_blk.a 00:01:37.516 LIB libspdk_event_scheduler.a 00:01:37.516 LIB libspdk_event_vmd.a 00:01:37.516 SO libspdk_event_keyring.so.1.0 00:01:37.516 SO libspdk_event_sock.so.5.0 00:01:37.516 LIB libspdk_event_iobuf.a 00:01:37.516 SO libspdk_event_vhost_blk.so.3.0 00:01:37.516 SO libspdk_event_scheduler.so.4.0 00:01:37.516 SO libspdk_event_vmd.so.6.0 00:01:37.516 SO libspdk_event_iobuf.so.3.0 00:01:37.516 SYMLINK 
libspdk_event_keyring.so 00:01:37.516 SYMLINK libspdk_event_sock.so 00:01:37.516 SYMLINK libspdk_event_scheduler.so 00:01:37.516 SYMLINK libspdk_event_vhost_blk.so 00:01:37.516 SYMLINK libspdk_event_vmd.so 00:01:37.516 SYMLINK libspdk_event_iobuf.so 00:01:37.774 CC module/event/subsystems/accel/accel.o 00:01:37.774 LIB libspdk_event_accel.a 00:01:37.774 SO libspdk_event_accel.so.6.0 00:01:38.032 SYMLINK libspdk_event_accel.so 00:01:38.032 CC module/event/subsystems/bdev/bdev.o 00:01:38.290 LIB libspdk_event_bdev.a 00:01:38.290 SO libspdk_event_bdev.so.6.0 00:01:38.290 SYMLINK libspdk_event_bdev.so 00:01:38.549 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:38.549 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:38.549 CC module/event/subsystems/nbd/nbd.o 00:01:38.549 CC module/event/subsystems/ublk/ublk.o 00:01:38.549 CC module/event/subsystems/scsi/scsi.o 00:01:38.549 LIB libspdk_event_nbd.a 00:01:38.549 LIB libspdk_event_ublk.a 00:01:38.807 LIB libspdk_event_scsi.a 00:01:38.807 SO libspdk_event_ublk.so.3.0 00:01:38.807 SO libspdk_event_nbd.so.6.0 00:01:38.807 SO libspdk_event_scsi.so.6.0 00:01:38.807 SYMLINK libspdk_event_ublk.so 00:01:38.807 SYMLINK libspdk_event_nbd.so 00:01:38.807 SYMLINK libspdk_event_scsi.so 00:01:38.807 LIB libspdk_event_nvmf.a 00:01:38.807 SO libspdk_event_nvmf.so.6.0 00:01:38.807 SYMLINK libspdk_event_nvmf.so 00:01:38.807 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:39.066 CC module/event/subsystems/iscsi/iscsi.o 00:01:39.066 LIB libspdk_event_vhost_scsi.a 00:01:39.066 SO libspdk_event_vhost_scsi.so.3.0 00:01:39.066 LIB libspdk_event_iscsi.a 00:01:39.066 SO libspdk_event_iscsi.so.6.0 00:01:39.066 SYMLINK libspdk_event_vhost_scsi.so 00:01:39.066 SYMLINK libspdk_event_iscsi.so 00:01:39.326 SO libspdk.so.6.0 00:01:39.326 SYMLINK libspdk.so 00:01:39.596 CXX app/trace/trace.o 00:01:39.596 CC app/trace_record/trace_record.o 00:01:39.596 CC app/spdk_nvme_identify/identify.o 00:01:39.596 CC test/rpc_client/rpc_client_test.o 00:01:39.596 CC app/spdk_top/spdk_top.o 00:01:39.596 CC app/spdk_nvme_discover/discovery_aer.o 00:01:39.596 CC app/spdk_nvme_perf/perf.o 00:01:39.596 TEST_HEADER include/spdk/accel.h 00:01:39.596 CC app/spdk_lspci/spdk_lspci.o 00:01:39.596 TEST_HEADER include/spdk/accel_module.h 00:01:39.596 TEST_HEADER include/spdk/assert.h 00:01:39.596 TEST_HEADER include/spdk/barrier.h 00:01:39.596 TEST_HEADER include/spdk/base64.h 00:01:39.596 TEST_HEADER include/spdk/bdev.h 00:01:39.596 TEST_HEADER include/spdk/bdev_module.h 00:01:39.596 TEST_HEADER include/spdk/bdev_zone.h 00:01:39.596 TEST_HEADER include/spdk/bit_array.h 00:01:39.596 TEST_HEADER include/spdk/bit_pool.h 00:01:39.596 TEST_HEADER include/spdk/blob_bdev.h 00:01:39.596 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:39.596 TEST_HEADER include/spdk/blobfs.h 00:01:39.596 TEST_HEADER include/spdk/blob.h 00:01:39.596 TEST_HEADER include/spdk/conf.h 00:01:39.596 TEST_HEADER include/spdk/config.h 00:01:39.596 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:39.596 CC app/spdk_dd/spdk_dd.o 00:01:39.596 TEST_HEADER include/spdk/cpuset.h 00:01:39.596 TEST_HEADER include/spdk/crc16.h 00:01:39.596 CC app/nvmf_tgt/nvmf_main.o 00:01:39.596 TEST_HEADER include/spdk/crc32.h 00:01:39.596 CC app/iscsi_tgt/iscsi_tgt.o 00:01:39.596 TEST_HEADER include/spdk/crc64.h 00:01:39.596 TEST_HEADER include/spdk/dif.h 00:01:39.596 CC app/vhost/vhost.o 00:01:39.596 TEST_HEADER include/spdk/dma.h 00:01:39.596 TEST_HEADER include/spdk/endian.h 00:01:39.596 TEST_HEADER include/spdk/env_dpdk.h 00:01:39.596 TEST_HEADER 
include/spdk/env.h 00:01:39.596 TEST_HEADER include/spdk/event.h 00:01:39.596 TEST_HEADER include/spdk/fd_group.h 00:01:39.596 TEST_HEADER include/spdk/fd.h 00:01:39.596 TEST_HEADER include/spdk/file.h 00:01:39.596 TEST_HEADER include/spdk/ftl.h 00:01:39.596 TEST_HEADER include/spdk/gpt_spec.h 00:01:39.596 TEST_HEADER include/spdk/hexlify.h 00:01:39.596 TEST_HEADER include/spdk/histogram_data.h 00:01:39.596 CC app/spdk_tgt/spdk_tgt.o 00:01:39.596 TEST_HEADER include/spdk/idxd.h 00:01:39.596 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:39.596 CC examples/nvme/arbitration/arbitration.o 00:01:39.596 TEST_HEADER include/spdk/idxd_spec.h 00:01:39.596 CC examples/util/zipf/zipf.o 00:01:39.596 TEST_HEADER include/spdk/init.h 00:01:39.596 CC examples/sock/hello_world/hello_sock.o 00:01:39.596 CC app/fio/nvme/fio_plugin.o 00:01:39.596 CC examples/nvme/reconnect/reconnect.o 00:01:39.596 CC examples/idxd/perf/perf.o 00:01:39.596 TEST_HEADER include/spdk/ioat.h 00:01:39.596 CC examples/nvme/abort/abort.o 00:01:39.596 CC examples/ioat/verify/verify.o 00:01:39.596 CC examples/vmd/lsvmd/lsvmd.o 00:01:39.596 TEST_HEADER include/spdk/ioat_spec.h 00:01:39.596 CC test/env/vtophys/vtophys.o 00:01:39.596 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:39.596 CC examples/accel/perf/accel_perf.o 00:01:39.596 CC examples/nvme/hotplug/hotplug.o 00:01:39.596 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:39.596 TEST_HEADER include/spdk/iscsi_spec.h 00:01:39.596 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:39.596 CC examples/ioat/perf/perf.o 00:01:39.596 CC examples/nvme/hello_world/hello_world.o 00:01:39.596 CC test/nvme/aer/aer.o 00:01:39.596 CC test/event/event_perf/event_perf.o 00:01:39.596 CC test/thread/poller_perf/poller_perf.o 00:01:39.596 TEST_HEADER include/spdk/json.h 00:01:39.596 TEST_HEADER include/spdk/jsonrpc.h 00:01:39.596 TEST_HEADER include/spdk/keyring.h 00:01:39.596 TEST_HEADER include/spdk/keyring_module.h 00:01:39.596 TEST_HEADER include/spdk/likely.h 00:01:39.596 TEST_HEADER include/spdk/log.h 00:01:39.596 TEST_HEADER include/spdk/lvol.h 00:01:39.855 TEST_HEADER include/spdk/memory.h 00:01:39.855 TEST_HEADER include/spdk/mmio.h 00:01:39.855 TEST_HEADER include/spdk/nbd.h 00:01:39.855 TEST_HEADER include/spdk/notify.h 00:01:39.855 CC examples/thread/thread/thread_ex.o 00:01:39.855 TEST_HEADER include/spdk/nvme.h 00:01:39.855 CC examples/blob/hello_world/hello_blob.o 00:01:39.855 TEST_HEADER include/spdk/nvme_intel.h 00:01:39.855 CC examples/bdev/bdevperf/bdevperf.o 00:01:39.855 CC examples/bdev/hello_world/hello_bdev.o 00:01:39.855 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:39.855 CC test/blobfs/mkfs/mkfs.o 00:01:39.855 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:39.855 CC test/accel/dif/dif.o 00:01:39.855 TEST_HEADER include/spdk/nvme_spec.h 00:01:39.855 CC test/app/bdev_svc/bdev_svc.o 00:01:39.855 CC test/dma/test_dma/test_dma.o 00:01:39.855 TEST_HEADER include/spdk/nvme_zns.h 00:01:39.855 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:39.855 CC examples/nvmf/nvmf/nvmf.o 00:01:39.855 CC test/bdev/bdevio/bdevio.o 00:01:39.855 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:39.856 TEST_HEADER include/spdk/nvmf.h 00:01:39.856 TEST_HEADER include/spdk/nvmf_spec.h 00:01:39.856 TEST_HEADER include/spdk/nvmf_transport.h 00:01:39.856 TEST_HEADER include/spdk/opal.h 00:01:39.856 TEST_HEADER include/spdk/opal_spec.h 00:01:39.856 TEST_HEADER include/spdk/pci_ids.h 00:01:39.856 TEST_HEADER include/spdk/pipe.h 00:01:39.856 TEST_HEADER include/spdk/queue.h 00:01:39.856 LINK spdk_lspci 
00:01:39.856 TEST_HEADER include/spdk/reduce.h 00:01:39.856 TEST_HEADER include/spdk/rpc.h 00:01:39.856 TEST_HEADER include/spdk/scheduler.h 00:01:39.856 TEST_HEADER include/spdk/scsi.h 00:01:39.856 TEST_HEADER include/spdk/scsi_spec.h 00:01:39.856 TEST_HEADER include/spdk/sock.h 00:01:39.856 TEST_HEADER include/spdk/stdinc.h 00:01:39.856 TEST_HEADER include/spdk/string.h 00:01:39.856 CC test/lvol/esnap/esnap.o 00:01:39.856 TEST_HEADER include/spdk/thread.h 00:01:39.856 TEST_HEADER include/spdk/trace.h 00:01:39.856 CC test/env/mem_callbacks/mem_callbacks.o 00:01:39.856 TEST_HEADER include/spdk/trace_parser.h 00:01:39.856 TEST_HEADER include/spdk/tree.h 00:01:39.856 TEST_HEADER include/spdk/ublk.h 00:01:39.856 TEST_HEADER include/spdk/util.h 00:01:39.856 TEST_HEADER include/spdk/uuid.h 00:01:39.856 TEST_HEADER include/spdk/version.h 00:01:39.856 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:39.856 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:39.856 LINK rpc_client_test 00:01:39.856 TEST_HEADER include/spdk/vhost.h 00:01:39.856 TEST_HEADER include/spdk/vmd.h 00:01:39.856 TEST_HEADER include/spdk/xor.h 00:01:39.856 TEST_HEADER include/spdk/zipf.h 00:01:39.856 CXX test/cpp_headers/accel.o 00:01:39.856 LINK spdk_nvme_discover 00:01:39.856 LINK lsvmd 00:01:39.856 LINK nvmf_tgt 00:01:40.122 LINK vtophys 00:01:40.122 LINK zipf 00:01:40.122 LINK interrupt_tgt 00:01:40.122 LINK poller_perf 00:01:40.122 LINK event_perf 00:01:40.122 LINK spdk_trace_record 00:01:40.122 LINK vhost 00:01:40.122 LINK env_dpdk_post_init 00:01:40.122 LINK iscsi_tgt 00:01:40.122 LINK pmr_persistence 00:01:40.122 LINK cmb_copy 00:01:40.122 LINK spdk_tgt 00:01:40.122 LINK verify 00:01:40.122 LINK bdev_svc 00:01:40.122 LINK ioat_perf 00:01:40.122 LINK hello_world 00:01:40.122 LINK hello_sock 00:01:40.122 LINK hotplug 00:01:40.122 LINK mkfs 00:01:40.122 LINK hello_bdev 00:01:40.122 LINK thread 00:01:40.122 LINK hello_blob 00:01:40.384 LINK aer 00:01:40.384 CXX test/cpp_headers/accel_module.o 00:01:40.384 CXX test/cpp_headers/assert.o 00:01:40.384 LINK arbitration 00:01:40.384 LINK spdk_dd 00:01:40.384 LINK idxd_perf 00:01:40.384 LINK reconnect 00:01:40.384 LINK spdk_trace 00:01:40.384 LINK nvmf 00:01:40.384 LINK abort 00:01:40.384 CXX test/cpp_headers/barrier.o 00:01:40.384 CC examples/vmd/led/led.o 00:01:40.384 CC examples/blob/cli/blobcli.o 00:01:40.384 CC test/event/reactor/reactor.o 00:01:40.384 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:40.384 CC test/app/histogram_perf/histogram_perf.o 00:01:40.384 CC test/nvme/reset/reset.o 00:01:40.384 LINK dif 00:01:40.384 CXX test/cpp_headers/base64.o 00:01:40.384 CC app/fio/bdev/fio_plugin.o 00:01:40.384 LINK test_dma 00:01:40.648 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:40.648 CC test/env/memory/memory_ut.o 00:01:40.648 CC test/env/pci/pci_ut.o 00:01:40.648 LINK bdevio 00:01:40.648 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:40.648 CXX test/cpp_headers/bdev.o 00:01:40.648 CC test/app/jsoncat/jsoncat.o 00:01:40.648 CC test/nvme/sgl/sgl.o 00:01:40.648 CC test/app/stub/stub.o 00:01:40.648 LINK accel_perf 00:01:40.648 CC test/nvme/e2edp/nvme_dp.o 00:01:40.648 CC test/event/reactor_perf/reactor_perf.o 00:01:40.648 LINK nvme_manage 00:01:40.648 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:40.648 CXX test/cpp_headers/bdev_module.o 00:01:40.648 CXX test/cpp_headers/bdev_zone.o 00:01:40.648 CC test/event/app_repeat/app_repeat.o 00:01:40.648 CXX test/cpp_headers/bit_array.o 00:01:40.648 CC test/nvme/overhead/overhead.o 00:01:40.648 CC 
test/nvme/err_injection/err_injection.o 00:01:40.648 CXX test/cpp_headers/bit_pool.o 00:01:40.648 CC test/event/scheduler/scheduler.o 00:01:40.648 LINK led 00:01:40.648 CC test/nvme/startup/startup.o 00:01:40.911 LINK reactor 00:01:40.911 LINK spdk_nvme 00:01:40.911 LINK histogram_perf 00:01:40.911 CC test/nvme/reserve/reserve.o 00:01:40.911 CXX test/cpp_headers/blob_bdev.o 00:01:40.911 CC test/nvme/connect_stress/connect_stress.o 00:01:40.911 CC test/nvme/simple_copy/simple_copy.o 00:01:40.911 CXX test/cpp_headers/blobfs_bdev.o 00:01:40.911 CC test/nvme/boot_partition/boot_partition.o 00:01:40.911 LINK jsoncat 00:01:40.911 CC test/nvme/compliance/nvme_compliance.o 00:01:40.911 CC test/nvme/fused_ordering/fused_ordering.o 00:01:40.911 CXX test/cpp_headers/blobfs.o 00:01:40.911 LINK reactor_perf 00:01:40.911 CXX test/cpp_headers/blob.o 00:01:40.911 CXX test/cpp_headers/conf.o 00:01:40.911 CXX test/cpp_headers/config.o 00:01:40.911 CXX test/cpp_headers/cpuset.o 00:01:40.911 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:40.911 CXX test/cpp_headers/crc16.o 00:01:40.911 LINK stub 00:01:40.911 LINK reset 00:01:40.911 LINK app_repeat 00:01:40.911 CC test/nvme/fdp/fdp.o 00:01:40.911 CXX test/cpp_headers/crc32.o 00:01:40.911 LINK mem_callbacks 00:01:41.170 CXX test/cpp_headers/crc64.o 00:01:41.170 CXX test/cpp_headers/dif.o 00:01:41.170 CXX test/cpp_headers/dma.o 00:01:41.170 CXX test/cpp_headers/endian.o 00:01:41.170 LINK err_injection 00:01:41.170 CC test/nvme/cuse/cuse.o 00:01:41.170 CXX test/cpp_headers/env_dpdk.o 00:01:41.170 CXX test/cpp_headers/env.o 00:01:41.170 CXX test/cpp_headers/event.o 00:01:41.170 LINK startup 00:01:41.170 LINK spdk_nvme_perf 00:01:41.170 LINK sgl 00:01:41.170 LINK spdk_nvme_identify 00:01:41.170 LINK nvme_dp 00:01:41.170 CXX test/cpp_headers/fd_group.o 00:01:41.170 LINK connect_stress 00:01:41.170 LINK scheduler 00:01:41.170 LINK boot_partition 00:01:41.170 LINK nvme_fuzz 00:01:41.170 LINK bdevperf 00:01:41.170 CXX test/cpp_headers/fd.o 00:01:41.170 CXX test/cpp_headers/file.o 00:01:41.171 LINK spdk_top 00:01:41.171 LINK reserve 00:01:41.171 LINK overhead 00:01:41.171 CXX test/cpp_headers/ftl.o 00:01:41.171 LINK pci_ut 00:01:41.171 CXX test/cpp_headers/gpt_spec.o 00:01:41.435 CXX test/cpp_headers/hexlify.o 00:01:41.435 CXX test/cpp_headers/histogram_data.o 00:01:41.435 CXX test/cpp_headers/idxd.o 00:01:41.435 CXX test/cpp_headers/idxd_spec.o 00:01:41.435 LINK simple_copy 00:01:41.435 CXX test/cpp_headers/init.o 00:01:41.435 CXX test/cpp_headers/ioat.o 00:01:41.435 CXX test/cpp_headers/ioat_spec.o 00:01:41.435 CXX test/cpp_headers/iscsi_spec.o 00:01:41.435 LINK fused_ordering 00:01:41.435 LINK blobcli 00:01:41.435 LINK doorbell_aers 00:01:41.435 CXX test/cpp_headers/json.o 00:01:41.435 CXX test/cpp_headers/jsonrpc.o 00:01:41.435 LINK vhost_fuzz 00:01:41.435 LINK spdk_bdev 00:01:41.435 CXX test/cpp_headers/keyring.o 00:01:41.435 CXX test/cpp_headers/keyring_module.o 00:01:41.435 CXX test/cpp_headers/likely.o 00:01:41.435 CXX test/cpp_headers/log.o 00:01:41.435 CXX test/cpp_headers/memory.o 00:01:41.435 CXX test/cpp_headers/lvol.o 00:01:41.435 CXX test/cpp_headers/mmio.o 00:01:41.435 CXX test/cpp_headers/nbd.o 00:01:41.435 CXX test/cpp_headers/notify.o 00:01:41.435 CXX test/cpp_headers/nvme.o 00:01:41.435 CXX test/cpp_headers/nvme_intel.o 00:01:41.435 CXX test/cpp_headers/nvme_ocssd.o 00:01:41.435 LINK nvme_compliance 00:01:41.435 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:41.435 CXX test/cpp_headers/nvme_spec.o 00:01:41.435 CXX test/cpp_headers/nvme_zns.o 
00:01:41.435 CXX test/cpp_headers/nvmf_cmd.o
00:01:41.435 CXX test/cpp_headers/nvmf_fc_spec.o
00:01:41.697 CXX test/cpp_headers/nvmf.o
00:01:41.697 CXX test/cpp_headers/nvmf_spec.o
00:01:41.697 CXX test/cpp_headers/nvmf_transport.o
00:01:41.697 CXX test/cpp_headers/opal.o
00:01:41.697 CXX test/cpp_headers/opal_spec.o
00:01:41.697 CXX test/cpp_headers/pci_ids.o
00:01:41.697 CXX test/cpp_headers/pipe.o
00:01:41.697 CXX test/cpp_headers/queue.o
00:01:41.697 CXX test/cpp_headers/reduce.o
00:01:41.697 CXX test/cpp_headers/rpc.o
00:01:41.697 CXX test/cpp_headers/scheduler.o
00:01:41.697 CXX test/cpp_headers/scsi.o
00:01:41.697 LINK fdp
00:01:41.697 CXX test/cpp_headers/scsi_spec.o
00:01:41.697 CXX test/cpp_headers/sock.o
00:01:41.697 CXX test/cpp_headers/stdinc.o
00:01:41.697 CXX test/cpp_headers/string.o
00:01:41.697 CXX test/cpp_headers/thread.o
00:01:41.697 CXX test/cpp_headers/trace.o
00:01:41.697 CXX test/cpp_headers/trace_parser.o
00:01:41.697 CXX test/cpp_headers/tree.o
00:01:41.697 CXX test/cpp_headers/ublk.o
00:01:41.697 CXX test/cpp_headers/util.o
00:01:41.697 CXX test/cpp_headers/uuid.o
00:01:41.697 CXX test/cpp_headers/version.o
00:01:41.697 CXX test/cpp_headers/vfio_user_pci.o
00:01:41.697 CXX test/cpp_headers/vfio_user_spec.o
00:01:41.697 CXX test/cpp_headers/vhost.o
00:01:41.697 CXX test/cpp_headers/vmd.o
00:01:41.697 CXX test/cpp_headers/xor.o
00:01:41.697 CXX test/cpp_headers/zipf.o
00:01:42.264 LINK memory_ut
00:01:42.523 LINK cuse
00:01:42.781 LINK iscsi_fuzz
00:01:45.317 LINK esnap
00:01:45.576
00:01:45.576 real 0m47.196s
00:01:45.576 user 9m50.573s
00:01:45.576 sys 2m20.962s
00:01:45.576 23:48:14 make -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:01:45.576 23:48:14 make -- common/autotest_common.sh@10 -- $ set +x
00:01:45.576 ************************************
00:01:45.576 END TEST make
00:01:45.576 ************************************
00:01:45.576 23:48:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:01:45.576 23:48:14 -- pm/common@29 -- $ signal_monitor_resources TERM
00:01:45.576 23:48:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:01:45.576 23:48:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.576 23:48:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:01:45.576 23:48:14 -- pm/common@44 -- $ pid=313195
00:01:45.576 23:48:14 -- pm/common@50 -- $ kill -TERM 313195
00:01:45.576 23:48:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.576 23:48:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:01:45.576 23:48:14 -- pm/common@44 -- $ pid=313197
00:01:45.576 23:48:14 -- pm/common@50 -- $ kill -TERM 313197
00:01:45.576 23:48:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.576 23:48:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:01:45.576 23:48:14 -- pm/common@44 -- $ pid=313199
00:01:45.576 23:48:14 -- pm/common@50 -- $ kill -TERM 313199
00:01:45.576 23:48:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.576 23:48:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:01:45.576 23:48:14 -- pm/common@44 -- $ pid=313230
00:01:45.576 23:48:14 -- pm/common@50 -- $ sudo -E kill -TERM 313230
00:01:45.576 23:48:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
23:48:14 -- nvmf/common.sh@7 -- # uname -s
00:01:45.576 23:48:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:01:45.576 23:48:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:01:45.577 23:48:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:01:45.577 23:48:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:01:45.577 23:48:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:01:45.577 23:48:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:01:45.577 23:48:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:01:45.577 23:48:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:01:45.577 23:48:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:01:45.577 23:48:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:01:45.577 23:48:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:01:45.577 23:48:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:01:45.577 23:48:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:01:45.577 23:48:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:01:45.577 23:48:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:01:45.577 23:48:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:01:45.577 23:48:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:01:45.577 23:48:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:01:45.577 23:48:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:45.577 23:48:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:45.577 23:48:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.577 23:48:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.577 23:48:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.577 23:48:14 -- paths/export.sh@5 -- # export PATH
00:01:45.577 23:48:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.577 23:48:14 -- nvmf/common.sh@47 -- # : 0
00:01:45.577 23:48:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:01:45.577 23:48:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:01:45.577 23:48:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:01:45.577 23:48:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:01:45.577 23:48:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:01:45.577 23:48:14 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:01:45.577 23:48:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:01:45.577 23:48:14 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:01:45.577 23:48:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:01:45.577 23:48:14 -- spdk/autotest.sh@32 -- # uname -s
00:01:45.577 23:48:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:01:45.577 23:48:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:01:45.577 23:48:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps
00:01:45.577 23:48:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:01:45.577 23:48:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps
00:01:45.577 23:48:14 -- spdk/autotest.sh@44 -- # modprobe nbd
00:01:45.835 23:48:14 -- spdk/autotest.sh@46 -- # type -P udevadm
00:01:45.835 23:48:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:01:45.835 23:48:14 -- spdk/autotest.sh@48 -- # udevadm_pid=367944
00:01:45.835 23:48:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:01:45.835 23:48:14 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:01:45.835 23:48:14 -- pm/common@17 -- # local monitor
00:01:45.835 23:48:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.835 23:48:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.835 23:48:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.835 23:48:14 -- pm/common@21 -- # date +%s
00:01:45.835 23:48:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:45.835 23:48:14 -- pm/common@21 -- # date +%s
00:01:45.835 23:48:14 -- pm/common@25 -- # sleep 1
00:01:45.835 23:48:14 -- pm/common@21 -- # date +%s
00:01:45.835 23:48:14 -- pm/common@21 -- # date +%s
00:01:45.835 23:48:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715723294
00:01:45.836 23:48:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715723294
00:01:45.836 23:48:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715723294
00:01:45.836 23:48:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715723294
00:01:45.836 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715723294_collect-vmstat.pm.log
00:01:45.836 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715723294_collect-cpu-load.pm.log
00:01:45.836 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715723294_collect-cpu-temp.pm.log
00:01:45.836 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715723294_collect-bmc-pm.bmc.pm.log
00:01:46.772 23:48:15 -- spdk/autotest.sh@55 -- $ trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
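The nvmf/common.sh values traced above (ports 4420 through 4422, the 192.168.100 prefix, the host NQN generated by nvme gen-hostnqn) are only exported at this point; nothing in this stretch of the log consumes them yet. As a hedged sketch of how a later test phase typically combines them with nvme-cli (the RDMA transport choice, the derived first-target address, and the NVMF_FIRST_TARGET_IP name below are illustrative assumptions, not commands taken from this run):

    #!/usr/bin/env bash
    # Sketch only: connect an initiator to an SPDK NVMe-oF target using the
    # defaults exported by test/nvmf/common.sh in the trace above. The
    # transport and target address are assumptions for illustration.
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_PORT=4420
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVMF_FIRST_TARGET_IP="${NVMF_IP_PREFIX}.${NVMF_IP_LEAST_ADDR}"   # 192.168.100.8 (assumed)
    # nvme connect flags: -t transport, -a target address, -s service id (port),
    # -n subsystem NQN; NVME_HOSTNQN/NVME_HOSTID come from 'nvme gen-hostnqn'
    # exactly as logged above.
    nvme connect -t rdma -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" -n "$NVME_SUBNQN" \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"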
00:01:46.772 23:48:15 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:01:46.772 23:48:15 -- common/autotest_common.sh@720 -- # xtrace_disable
00:01:46.772 23:48:15 -- common/autotest_common.sh@10 -- # set +x
00:01:46.772 23:48:15 -- spdk/autotest.sh@59 -- # create_test_list
00:01:46.772 23:48:15 -- common/autotest_common.sh@744 -- # xtrace_disable
00:01:46.772 23:48:15 -- common/autotest_common.sh@10 -- # set +x
00:01:46.772 23:48:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh
00:01:46.772 23:48:15 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:46.772 23:48:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:46.772 23:48:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:01:46.772 23:48:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:46.772 23:48:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:01:46.772 23:48:15 -- common/autotest_common.sh@1451 -- # uname
00:01:46.772 23:48:15 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']'
00:01:46.772 23:48:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:01:46.772 23:48:15 -- common/autotest_common.sh@1471 -- # uname
00:01:46.772 23:48:15 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]]
00:01:46.772 23:48:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:01:46.772 23:48:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:01:46.772 23:48:15 -- spdk/autotest.sh@72 -- # hash lcov
00:01:46.772 23:48:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:01:46.772 23:48:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:01:46.772 --rc lcov_branch_coverage=1
00:01:46.772 --rc lcov_function_coverage=1
00:01:46.772 --rc genhtml_branch_coverage=1
00:01:46.772 --rc genhtml_function_coverage=1
00:01:46.772 --rc genhtml_legend=1
00:01:46.772 --rc geninfo_all_blocks=1
00:01:46.772 '
00:01:46.772 23:48:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:01:46.772 --rc lcov_branch_coverage=1
00:01:46.772 --rc lcov_function_coverage=1
00:01:46.772 --rc genhtml_branch_coverage=1
00:01:46.772 --rc genhtml_function_coverage=1
00:01:46.772 --rc genhtml_legend=1
00:01:46.772 --rc geninfo_all_blocks=1
00:01:46.772 '
00:01:46.772 23:48:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:01:46.772 --rc lcov_branch_coverage=1
00:01:46.772 --rc lcov_function_coverage=1
00:01:46.772 --rc genhtml_branch_coverage=1
00:01:46.772 --rc genhtml_function_coverage=1
00:01:46.772 --rc genhtml_legend=1
00:01:46.772 --rc geninfo_all_blocks=1
00:01:46.772 --no-external'
00:01:46.772 23:48:15 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:01:46.772 --rc lcov_branch_coverage=1
00:01:46.772 --rc lcov_function_coverage=1
00:01:46.772 --rc genhtml_branch_coverage=1
00:01:46.772 --rc genhtml_function_coverage=1
00:01:46.772 --rc genhtml_legend=1
00:01:46.772 --rc geninfo_all_blocks=1
00:01:46.772 --no-external'
00:01:46.772 23:48:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:01:46.772 lcov: LCOV version 1.14
00:01:46.776 23:48:16 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q
-c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:01:59.027 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:59.027 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:01:59.964 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:59.964 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:59.964 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:59.964 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:01:59.964 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:59.964 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:18.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:18.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:18.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:18.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:18.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:18.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:18.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:18.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:18.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:18.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:18.068 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:18.068
(the same ":no functions found" / "GCOV did not produce any data" warning pair repeats for every remaining header stub under test/cpp_headers, blob_bdev.gcno through zipf.gcno)
00:02:18.328 23:48:47 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:18.328 23:48:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:18.328 23:48:47 -- common/autotest_common.sh@10 -- # set +x 00:02:18.328 23:48:47 -- spdk/autotest.sh@91 -- # rm -f 00:02:18.328 23:48:47 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:02:19.705 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:02:19.705 0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7 (8086 0e20-0e27): Already using the ioatdma driver (16 I/OAT channels)
00:02:19.998 23:48:49 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:19.998 23:48:49 -- common/autotest_common.sh@1665 -- #
zoned_devs=() 00:02:19.998 23:48:49 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:19.998 23:48:49 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:19.998 23:48:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:19.998 23:48:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:19.998 23:48:49 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:19.998 23:48:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:19.998 23:48:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:19.998 23:48:49 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:19.998 23:48:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:19.998 23:48:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:19.998 23:48:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:19.998 23:48:49 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:19.998 23:48:49 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:20.264 No valid GPT data, bailing 00:02:20.264 23:48:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:20.264 23:48:49 -- scripts/common.sh@391 -- # pt= 00:02:20.264 23:48:49 -- scripts/common.sh@392 -- # return 1 00:02:20.264 23:48:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:20.264 1+0 records in 00:02:20.264 1+0 records out 00:02:20.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0020695 s, 507 MB/s 00:02:20.264 23:48:49 -- spdk/autotest.sh@118 -- # sync 00:02:20.264 23:48:49 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:20.264 23:48:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:20.264 23:48:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:22.167 23:48:51 -- spdk/autotest.sh@124 -- # uname -s 00:02:22.167 23:48:51 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:22.167 23:48:51 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.167 23:48:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:22.167 23:48:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:22.167 23:48:51 -- common/autotest_common.sh@10 -- # set +x 00:02:22.167 ************************************ 00:02:22.167 START TEST setup.sh 00:02:22.167 ************************************ 00:02:22.167 23:48:51 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.167 * Looking for test storage... 
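The pre-cleanup pass traced above (spdk/autotest.sh@96 through @118) does three things before any suite runs: it collects zoned namespaces from each device's /sys/block/*/queue/zoned attribute (a zoned namespace must not be blindly zeroed), it probes /dev/nvme0n1 for a partition table (scripts/spdk-gpt.py and blkid both come back empty, hence "No valid GPT data, bailing"), and only then does it wipe the first MiB of the namespace and sync. A minimal standalone sketch of the same sequence, with plain blkid standing in for the spdk-gpt.py probe and the device glob as an illustrative assumption:

```bash
#!/usr/bin/env bash
# Sketch of the pre-cleanup wipe traced above (illustrative, not autotest.sh
# itself): skip zoned namespaces, leave partitioned disks alone, and zero the
# first MiB of anything else so the suites start from a blank device.
set -u

for sysdev in /sys/block/nvme*n*; do
    [[ -e $sysdev ]] || continue        # no NVMe namespaces present
    dev=/dev/${sysdev##*/}
    # "none" means a regular namespace; host-aware/host-managed are zoned.
    if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
        echo "skipping zoned namespace $dev"
        continue
    fi
    # The real harness asks scripts/spdk-gpt.py first; an empty PTTYPE from
    # blkid is the "No valid GPT data, bailing" case in the log.
    if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
        echo "$dev carries a $pt partition table, leaving it alone"
        continue
    fi
    dd if=/dev/zero of="$dev" bs=1M count=1   # same wipe as autotest.sh@114
done
sync
```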
00:02:22.167 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:22.167 23:48:51 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:22.167 23:48:51 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:22.167 23:48:51 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:22.167 23:48:51 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:22.167 23:48:51 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:22.167 23:48:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:22.167 ************************************ 00:02:22.167 START TEST acl 00:02:22.167 ************************************ 00:02:22.167 23:48:51 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:22.167 * Looking for test storage... 00:02:22.167 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:22.167 23:48:51 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:22.167 23:48:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:22.167 23:48:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:22.167 23:48:51 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:22.167 23:48:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:22.167 23:48:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:22.167 23:48:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:22.167 23:48:51 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:22.167 23:48:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:22.167 23:48:51 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:22.167 23:48:51 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:22.167 23:48:51 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:22.167 23:48:51 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:22.167 23:48:51 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:22.167 23:48:51 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:22.167 23:48:51 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:24.069 23:48:53 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:24.069 23:48:53 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:24.069 23:48:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.069 23:48:53 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:24.069 23:48:53 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:24.069 23:48:53 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:25.445 Hugepages 00:02:25.445 node hugesize free / total 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.445 
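For context on the scan the acl suite is running here: `setup output status` prints a hugepage summary first, then one PCI function per line under the "Type BDF Vendor Device NUMA Driver Device Block devices" header that appears just below. acl.sh reads every line into `_ dev _ _ _ driver _` and keeps only entries whose second field parses as a BDF and whose driver is nvme, which is why the hugepage rows and all sixteen I/OAT channels fall through the continues. A minimal sketch of that filter, with a fabricated here-doc standing in for real status output:

```bash
#!/usr/bin/env bash
# Sketch of the BDF filter acl.sh applies to `setup.sh status` output.
# The here-doc imitates the status format from this log; values are made up.
devs=()            # BDFs of NVMe controllers the suite will exercise
declare -A drivers # BDF -> currently bound driver

while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue   # drops "Hugepages"/"node hugesize" rows
    [[ $driver == nvme ]] || continue   # drops the ioatdma channels
    devs+=("$dev")
    drivers["$dev"]=$driver
done <<'EOF'
Hugepages
node hugesize free / total
node0 1048576kB 0 3
Type BDF Vendor Device NUMA Driver Device Block devices
I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
EOF

printf 'found %d nvme controller(s): %s\n' "${#devs[@]}" "${devs[*]}"
```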
23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.445
00:02:25.445 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.445
23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.445
(the identical BDF-match / ioatdma-skip / read cycle repeats for the remaining fifteen I/OAT channels, 0000:00:04.1 through 0000:80:04.7)
23:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.445 23:48:54 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:25.445
23:48:54 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:25.445 23:48:54 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:25.445 23:48:54 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:25.445 23:48:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:25.445
************************************
00:02:25.445 START TEST denied
************************************
00:02:25.445 23:48:54 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:25.445 23:48:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:25.445 23:48:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
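The denied test starting here exercises setup.sh's block list: with the controller's BDF in PCI_BLOCKED, `setup config` must leave 0000:88:00.0 on its current driver and print the skip message that the grep on the next line asserts. A compressed sketch of the same check, assuming this job's workspace path and BDF (it needs root on a real node):

```bash
#!/usr/bin/env bash
# Sketch of the "denied" assertion: block one controller via PCI_BLOCKED,
# run setup.sh config, and require the skip message for that BDF.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # layout from this job
BDF=0000:88:00.0                                         # controller under test

# PCI_BLOCKED is a space-separated BDF list honored by scripts/setup.sh.
out=$(PCI_BLOCKED=" $BDF" "$SPDK_DIR/scripts/setup.sh" config)

if grep -q "Skipping denied controller at $BDF" <<<"$out"; then
    echo "PASS: $BDF stayed on its original driver"
else
    echo "FAIL: setup.sh did not skip $BDF" >&2
    exit 1
fi

# Return every non-blocked device to its kernel driver, as the test does.
"$SPDK_DIR/scripts/setup.sh" reset
```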
23:48:54 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:25.445 23:48:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.445 23:48:54 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:26.822 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:26.822 23:48:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:29.355 00:02:29.355 real 0m4.009s 00:02:29.355 user 0m1.248s 00:02:29.355 sys 0m1.934s 00:02:29.355 23:48:58 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:29.355 23:48:58 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:29.355 ************************************ 00:02:29.355 END TEST denied 00:02:29.355 ************************************ 00:02:29.355 23:48:58 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:29.355 23:48:58 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:29.355 23:48:58 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:29.355 23:48:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:29.355 ************************************ 00:02:29.355 START TEST allowed 00:02:29.355 ************************************ 00:02:29.355 23:48:58 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:29.355 23:48:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:29.356 23:48:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:29.356 23:48:58 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:29.356 23:48:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.356 23:48:58 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:31.883 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:31.883 23:49:01 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:31.883 23:49:01 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:31.883 23:49:01 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:31.883 23:49:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:31.883 23:49:01 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:33.786 00:02:33.786 real 0m4.178s 00:02:33.786 user 0m1.186s 00:02:33.786 sys 0m1.924s 00:02:33.786 23:49:02 setup.sh.acl.allowed -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:02:33.786 23:49:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:33.786 ************************************ 00:02:33.786 END TEST allowed 00:02:33.786 ************************************ 00:02:33.786 00:02:33.786 real 0m11.327s 00:02:33.786 user 0m3.659s 00:02:33.787 sys 0m5.862s 00:02:33.787 23:49:02 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:33.787 23:49:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:33.787 ************************************ 00:02:33.787 END TEST acl 00:02:33.787 ************************************ 00:02:33.787 23:49:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:02:33.787 23:49:02 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:33.787 23:49:02 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:33.787 23:49:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:33.787 ************************************ 00:02:33.787 START TEST hugepages 00:02:33.787 ************************************ 00:02:33.787 23:49:02 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:02:33.787 * Looking for test storage... 00:02:33.787 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35173628 kB' 'MemAvailable: 39860320 kB' 'Buffers: 2696 kB' 'Cached: 18498336 kB' 'SwapCached: 0 kB' 'Active: 14486500 kB' 'Inactive: 4470784 kB' 'Active(anon): 13897340 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459428 kB' 'Mapped: 
234052 kB' 'Shmem: 13441088 kB' 'KReclaimable: 240740 kB' 'Slab: 634472 kB' 'SReclaimable: 240740 kB' 'SUnreclaim: 393732 kB' 'KernelStack: 13072 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 15026812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199468 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.787 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:33.788 
23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]]
[setup/common.sh@31-32 keeps scanning the remaining /proc/meminfo keys (HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp), hitting 'continue' on every key that is not Hugepagesize]
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.788 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.789 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.789 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:33.789 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.789 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.789 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:33.789 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:33.789 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:33.789 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:33.789 23:49:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:33.789 23:49:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:33.789 23:49:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:33.789 23:49:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:33.789 ************************************ 00:02:33.789 START TEST default_setup 00:02:33.789 ************************************ 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.789 23:49:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:35.164 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:35.164 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:35.164 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:35.164 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:35.164 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:35.164 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:35.164 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:35.164 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:35.164 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:35.164 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:35.164 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:35.164 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:35.164 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:35.164 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:35.164 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:35.164 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:36.103 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
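The trace above is setup/common.sh's get_meminfo helper: it reads /proc/meminfo (or a per-node meminfo file when a node argument is given) and walks it with an IFS=': ' read loop, comparing each key against the requested field. A minimal standalone sketch of that idiom follows; it is illustrative only, not SPDK's exact function, and the name get_meminfo_sketch is made up here:

    #!/usr/bin/env bash
    # Sketch of the idiom traced above: split each "Key:   value kB"
    # line of /proc/meminfo on ':' plus spaces, print the value of the
    # requested key. Name and structure are illustrative, not SPDK's code.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
    }

    get_meminfo_sketch Hugepagesize   # on this runner: 2048

SPDK's traced version additionally loads the file into an array (the mapfile and mem=() lines above) and strips the 'Node N ' prefix that per-node meminfo files carry.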
00:02:36.103 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37274036 kB' 'MemAvailable: 41960724 kB' 'Buffers: 2696 kB' 'Cached: 18498436 kB' 'SwapCached: 0 kB' 'Active: 14510800 kB' 'Inactive: 4470784 kB' 'Active(anon): 13921640 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483732 kB' 'Mapped: 234532 kB' 'Shmem: 13441188 kB' 'KReclaimable: 240732 kB' 'Slab: 634436 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393704 kB' 'KernelStack: 13056 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15054224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199568 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[setup/common.sh@31-32 compares each /proc/meminfo key against AnonHugePages, hitting 'continue' on every key from MemTotal through HardwareCorrupted]
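A quick cross-check of the snapshot just printed, using only values from the dump: HugePages_Total multiplied by Hugepagesize should equal the Hugetlb figure, and should match the 2097152 kB requested through get_test_nr_hugepages earlier.

    # Arithmetic check against the dump above (shell arithmetic only):
    total=1024         # HugePages_Total from the dump
    pagesize_kb=2048   # Hugepagesize from the dump, in kB
    echo $(( total * pagesize_kb ))    # 2097152 -> matches 'Hugetlb: 2097152 kB'
    echo $(( 2097152 / pagesize_kb ))  # 1024    -> the nr_hugepages chosen earlier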
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.368 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.369 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.369 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:36.369 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:36.369 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37274424 kB' 'MemAvailable: 41961112 kB' 'Buffers: 2696 kB' 'Cached: 18498440 kB' 'SwapCached: 0 kB' 'Active: 14506428 kB' 'Inactive: 4470784 kB' 'Active(anon): 13917268 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479228 kB' 'Mapped: 234468 kB' 'Shmem: 13441192 kB' 'KReclaimable: 240732 kB' 'Slab: 634436 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393704 kB' 'KernelStack: 13392 kB' 'PageTables: 9632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15050512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[setup/common.sh@31-32 compares each /proc/meminfo key against HugePages_Surp, hitting 'continue' on every key from MemTotal through HugePages_Rsvd]
00:02:36.370 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.370 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:36.370 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:36.370 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:02:36.370 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:36.370 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
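The pattern of hugepages.sh@97-@100 above: verify_nr_hugepages samples AnonHugePages (anon=0), HugePages_Surp (surp=0) and now HugePages_Rsvd so it can judge how many of the reported hugepages are genuinely usable. What follows is a hedged reconstruction of that bookkeeping; the exact checks live in setup/hugepages.sh, and everything past the get_meminfo reads is an assumption:

    # Assumed accounting, reusing get_meminfo_sketch from the earlier note.
    anon=$(get_meminfo_sketch AnonHugePages)    # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    free=$(get_meminfo_sketch HugePages_Free)
    total=$(get_meminfo_sketch HugePages_Total)
    # Assumption: surplus pages inflate the pool and reserved pages are
    # already promised, so the comparison of interest is roughly:
    echo "usable=$(( free - resv )) expected=$(( total - surp ))"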
00:02:36.370 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:36.370 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:36.370 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37274040 kB' 'MemAvailable: 41960728 kB' 'Buffers: 2696 kB' 'Cached: 18498456 kB' 'SwapCached: 0 kB' 'Active: 14506572 kB' 'Inactive: 4470784 kB' 'Active(anon): 13917412 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479444 kB' 'Mapped: 234208 kB' 'Shmem: 13441208 kB' 'KReclaimable: 240732 kB' 'Slab: 634436 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393704 kB' 'KernelStack: 13504 kB' 'PageTables: 10440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15048140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[setup/common.sh@31-32 walks the /proc/meminfo keys once more, comparing each against HugePages_Rsvd with 'continue' on every miss; the trace is cut off partway through this scan, after PageTables]
setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.372 
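The snapshot is internally consistent on the hugepage side: 1024 preallocated pages at the default 2048 kB page size account exactly for the Hugetlb figure. A quick check with shell arithmetic, values copied from the line above:

    pages=1024 pagesz_kb=2048
    echo $(( pages * pagesz_kb ))   # 2097152 kB, matching 'Hugetlb: 2097152 kB'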
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:36.371 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same @32/@31 scan repeats for each remaining meminfo field above, with no match, until HugePages_Rsvd is reached ...]
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:36.372 nr_hugepages=1024
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:36.372 resv_hugepages=0
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:36.372 surplus_hugepages=0
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:36.372 anon_hugepages=0
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
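The probe that just returned is get_meminfo: it walks the snapshot field by field, splitting each 'Key: value kB' line on IFS=': ' until the requested key (here HugePages_Rsvd) matches, then echoes the value. A minimal self-contained sketch of the same parsing technique; the function name meminfo_field is hypothetical, and the real helper in setup/common.sh snapshots the file with mapfile first, as the trace shows:

    meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # stop at the first line whose key matches the request
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    meminfo_field HugePages_Rsvd   # prints 0 on the node traced above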
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:36.372 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37272836 kB' 'MemAvailable: 41959524 kB' 'Buffers: 2696 kB' 'Cached: 18498480 kB' 'SwapCached: 0 kB' 'Active: 14507228 kB' 'Inactive: 4470784 kB' 'Active(anon): 13918068 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480060 kB' 'Mapped: 234208 kB' 'Shmem: 13441232 kB' 'KReclaimable: 240732 kB' 'Slab: 634436 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393704 kB' 'KernelStack: 13488 kB' 'PageTables: 10952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15049532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199836 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:36.373 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same @32/@31 scan repeats for each field, with no match, until HugePages_Total is reached ...]
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20959580 kB' 'MemUsed: 11870304 kB' 'SwapCached: 0 kB' 'Active: 8127704 kB' 'Inactive: 187176 kB' 'Active(anon): 7731548 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071908 kB' 'Mapped: 122728 kB' 'AnonPages: 246052 kB' 'Shmem: 7488576 kB' 'KernelStack: 7672 kB' 'PageTables: 5316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 346460 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 230776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
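This third call passes a node id, so common.sh@23-24 swap mem_f from the machine-wide /proc/meminfo to the per-NUMA-node file, whose lines carry a 'Node 0 ' prefix that the extglob substitution at @29 strips before the same scan runs. A standalone sketch of that selection and prefix-stripping step, under the assumption that node 0 exists on the host:

    shopt -s extglob                  # required for the +([0-9]) pattern below
    node=0
    mem_f=/proc/meminfo
    # Prefer the per-node view when it exists, exactly as the trace does.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the 'Node 0 ' prefix per line
    printf '%s\n' "${mem[@]}" | grep HugePages_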
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.375 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same @32/@31 scan repeats for each per-node field, with no match, until HugePages_Surp is reached ...]
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:36.376 node0=1024 expecting 1024
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:36.376
00:02:36.376 real    0m2.617s
00:02:36.376 user    0m0.689s
00:02:36.376 sys     0m0.927s
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:36.376 23:49:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:36.376 ************************************
00:02:36.376 END TEST default_setup
00:02:36.376 ************************************
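default_setup passes because the observed count on node 0 equals the expectation ('node0=1024 expecting 1024'). The closing comparison at hugepages.sh@128-130 amounts to a check like the sketch below; the expected table is an assumption for this one-populated-node layout:

    declare -A expected=([0]=1024)    # assumed: all pages land on node 0
    for node in "${!expected[@]}"; do
        # per-node lines read 'Node 0 HugePages_Total: 1024', so $4 is the count
        actual=$(awk '$3 == "HugePages_Total:" {print $4}' \
            /sys/devices/system/node/node$node/meminfo)
        echo "node$node=$actual expecting ${expected[$node]}"
        [[ $actual == "${expected[$node]}" ]] || exit 1
    done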
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:36.376 23:49:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
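Before the setup.sh output below: the parameters traced above are plain arithmetic. get_test_nr_hugepages was called with size=1048576 kB (1 GiB) and node IDs 0 and 1; with the 2048 kB default hugepage size from the meminfo dumps, that yields 512 pages, requested on each node. A sketch of the same computation, with illustrative variable names taken from the trace rather than the script verbatim:

    size_kb=1048576                                  # requested test size: 1 GiB in kB
    default_hugepages_kb=2048                        # Hugepagesize from /proc/meminfo
    nr_hugepages=$((size_kb / default_hugepages_kb)) # 1048576 / 2048 = 512
    declare -a nodes_test
    for node in 0 1; do                              # HUGENODE=0,1 in the trace
        nodes_test[node]=$nr_hugepages               # 512 pages requested per node
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"         # matches NRHUGE=512 HUGENODE=0,1

With 512 pages on each of two nodes, the pool setup.sh configures should total 1024 pages, which is the nr_hugepages=1024 that verify_nr_hugepages checks against below.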
00:02:37.757 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:37.757 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:37.757 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:37.757 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:37.757 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:37.757 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:37.757 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:37.757 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:37.757 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:37.757 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:37.757 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:37.757 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:37.757 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:37.757 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:37.757 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:37.757 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:37.757 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37279032 kB' 'MemAvailable: 41965720 kB' 'Buffers: 2696 kB' 'Cached: 18498548 kB' 'SwapCached: 0 kB' 'Active: 14505132 kB' 'Inactive: 4470784 kB' 'Active(anon): 13915972 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477372 kB' 'Mapped: 234104 kB' 'Shmem: 13441300 kB' 'KReclaimable: 240732 kB' 'Slab: 634068 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393336 kB' 'KernelStack: 13040 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15047384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
00:02:37.757 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [… read/continue over MemTotal through HardwareCorrupted — none match AnonHugePages …]
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37302656 kB' 'MemAvailable: 41989344 kB' 'Buffers: 2696 kB' 'Cached: 18498552 kB' 'SwapCached: 0 kB' 'Active: 14505432 kB' 'Inactive: 4470784 kB' 'Active(anon): 13916272 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477772 kB' 'Mapped: 234104 kB' 'Shmem: 13441304 kB' 'KReclaimable: 240732 kB' 'Slab: 634044 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393312 kB' 'KernelStack: 13056 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15047400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
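Each get_meminfo call re-reads the whole of /proc/meminfo and walks it key by key, which is why a single probe produces dozens of near-identical @32 compare lines. For reference, the same counters the test is after can be pulled in one shot with standard tools; against the dump above this prints (annotation only, not part of the traced script):

    $ grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp))' /proc/meminfo
    AnonHugePages:         0 kB
    HugePages_Total:    1024
    HugePages_Free:     1024
    HugePages_Rsvd:        0
    HugePages_Surp:        0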
00:02:37.759 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [… read/continue over MemTotal through HugePages_Rsvd — none match HugePages_Surp …]
00:02:37.760 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.760 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:37.760 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37303328 kB' 'MemAvailable: 41990016 kB' 'Buffers: 2696 kB' 'Cached: 18498572 kB' 'SwapCached: 0 kB' 'Active: 14505028 kB' 'Inactive: 4470784 kB' 'Active(anon): 13915868 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477740 kB' 'Mapped: 234076 kB' 'Shmem: 13441324 kB' 'KReclaimable: 240732 kB' 'Slab: 634060 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393328 kB' 'KernelStack: 13104 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15047424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
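At this point verify_nr_hugepages has anon=0 (transparent hugepages are not inflating the numbers) and surp=0 (no surplus pages beyond the configured pool), and the probe above is fetching HugePages_Rsvd. Together with the per-node counts, these feed the sorted_t/sorted_s bookkeeping seen at the end of default_setup. A sketch of that accounting, with the consistency check inferred from the values in the dumps rather than quoted from the script:

    anon=$(get_meminfo AnonHugePages)   # 0 kB -> THP not skewing the measurement
    surp=$(get_meminfo HugePages_Surp)  # 0    -> pool holds no surplus pages
    resv=$(get_meminfo HugePages_Rsvd)  # 0    -> no reserved-but-unfaulted pages
    total=$(get_meminfo HugePages_Total)
    # 512 pages were requested on each of nodes 0 and 1:
    (( total == 512 + 512 )) && echo "pool sized as expected: $total"

The dumps show HugePages_Total: 1024 and HugePages_Free: 1024, so the per-node allocation landed exactly as requested.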
00:02:37.761 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [… read/continue over MemTotal through Slab — none match HugePages_Rsvd …]
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.762 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.763 23:49:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:37.763 nr_hugepages=1024 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:37.763 resv_hugepages=0 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:37.763 surplus_hugepages=0 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:37.763 anon_hugepages=0 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37303328 kB' 'MemAvailable: 41990016 kB' 'Buffers: 2696 kB' 'Cached: 18498588 kB' 'SwapCached: 0 kB' 'Active: 14504796 kB' 'Inactive: 4470784 kB' 'Active(anon): 13915636 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477452 kB' 'Mapped: 234076 kB' 'Shmem: 13441340 kB' 'KReclaimable: 240732 kB' 'Slab: 634060 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393328 kB' 'KernelStack: 13088 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15047444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199644 kB' 
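The cycle collapsed above is the get_meminfo() helper from test/setup/common.sh scanning meminfo one "key: value" line at a time. Below is a minimal sketch of what the traced commands appear to implement, reconstructed from this xtrace alone (loop framing simplified, so a sketch rather than the verbatim SPDK source):

    # Sketch of get_meminfo() as reconstructed from the trace above: $1 is the
    # meminfo key to look up, optional $2 a NUMA node number.
    shopt -s extglob                      # for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # With a node argument, read the per-node copy from sysfs instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"               # kB figure, or a bare page count
                return 0
            fi
        done
        return 1
    }

With this definition, get_meminfo HugePages_Rsvd prints 0 and get_meminfo HugePages_Total prints 1024, matching the "echo 0" and "echo 1024" lines in the surrounding trace.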
00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37303328 kB' 'MemAvailable: 41990016 kB' 'Buffers: 2696 kB' 'Cached: 18498588 kB' 'SwapCached: 0 kB' 'Active: 14504796 kB' 'Inactive: 4470784 kB' 'Active(anon): 13915636 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477452 kB' 'Mapped: 234076 kB' 'Shmem: 13441340 kB' 'KReclaimable: 240732 kB' 'Slab: 634060 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393328 kB' 'KernelStack: 13088 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15047444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
00:02:37.763 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- [xtrace collapsed: the same read/continue cycle walks every key from MemTotal through Unaccepted against HugePages_Total]
00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.026 23:49:07
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.026 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.027 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22041548 kB' 'MemUsed: 10788336 kB' 'SwapCached: 0 kB' 'Active: 8127056 kB' 'Inactive: 187176 kB' 'Active(anon): 7730900 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071908 kB' 'Mapped: 122680 kB' 'AnonPages: 245432 kB' 'Shmem: 7488576 kB' 'KernelStack: 7544 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 346132 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 230448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.027 23:49:07
setup.sh.hugepages.per_node_1G_alloc -- [xtrace collapsed: node0 meminfo keys (MemTotal through HugePages_Free) scanned against HugePages_Surp]
00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
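The per-node pass above mirrors the global one: get_nodes enumerates /sys/devices/system/node/node+([0-9]) and the nodes_test loop pads each node's expected count with reserved and surplus pages. A sketch of that bookkeeping, reusing the get_meminfo sketch above (nodes_test's initial seeding happens outside this excerpt, so the 512s below are this run's observed values, hard-coded for illustration):

    shopt -s extglob
    resv=0                                # HugePages_Rsvd from hugepages.sh@100
    nodes_sys=() nodes_test=()
    # get_nodes (hugepages.sh@27-@33 above): one slot per NUMA node directory;
    # ${node##*node} strips the sysfs path down to the bare node index.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512     # per-node hugepage count in this run
    done
    no_nodes=${#nodes_sys[@]}             # 2 on this machine
    ((no_nodes > 0))
    # hugepages.sh@115-@117: pad each node's expected count with reserved and
    # surplus pages before comparing against the kernel's per-node figures.
    nodes_test=(512 512)
    for node in "${!nodes_test[@]}"; do
        ((nodes_test[node] += resv))
        surp_n=$(get_meminfo HugePages_Surp "$node")
        ((nodes_test[node] += surp_n))    # += 0 for both nodes in this run
    done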
kB' 'MemUsed: 12450356 kB' 'SwapCached: 0 kB' 'Active: 6378272 kB' 'Inactive: 4283608 kB' 'Active(anon): 6185268 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10429380 kB' 'Mapped: 111396 kB' 'AnonPages: 232548 kB' 'Shmem: 5952768 kB' 'KernelStack: 5528 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125048 kB' 'Slab: 287928 kB' 'SReclaimable: 125048 kB' 'SUnreclaim: 162880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.028 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 
23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.029 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:38.030 node0=512 expecting 512 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:38.030 node1=512 expecting 512 00:02:38.030 23:49:07 
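[editor's note] The two lookups traced above are one helper walking a meminfo file. A minimal sketch of that flow, reconstructed from the xtrace alone (SPDK's real helper lives in test/setup/common.sh; this is not the verbatim source):

  # Sketch of the get_meminfo flow visible in the trace: pick the per-node
  # meminfo file when a node id is given, strip the "Node <id> " prefix such
  # files carry, then scan "Key: value" pairs until the requested key matches.
  shopt -s extglob   # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f mem
      mem_f=/proc/meminfo                                    # global view by default
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # drop the per-node "Node N " prefix
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"   # e.g. 0 for HugePages_Surp in the run above
              return 0
          fi
          continue          # not the requested key; keep scanning
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as get_meminfo HugePages_Surp 1, this reads /sys/devices/system/node/node1/meminfo and prints 0, which is exactly the echo 0 / return 0 pair the trace shows.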
00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:38.030
00:02:38.030 real 0m1.551s
00:02:38.030 user 0m0.646s
00:02:38.030 sys 0m0.870s
00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:38.030 23:49:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:38.030 ************************************
00:02:38.030 END TEST per_node_1G_alloc
00:02:38.030 ************************************
00:02:38.030 23:49:07 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:02:38.030 23:49:07 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:38.030 23:49:07 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:38.030 23:49:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:38.030 ************************************
00:02:38.030 START TEST even_2G_alloc
00:02:38.030 ************************************
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
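[editor's note] The get_test_nr_hugepages trace above is plain arithmetic. The unit is inferred, not stated in the log: 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size yields the nr_hugepages=1024 seen in the trace, matching the 'Hugetlb: 2097152 kB' and 'HugePages_Total: 1024' lines later on; with no user-supplied node list, the fallback loop hands 512 pages to each of the two nodes. An illustrative restatement (names echo setup/hugepages.sh, but this is not the SPDK function):

  # Back-of-the-envelope restatement of the even_2G_alloc sizing traced above.
  size=2097152                                  # requested total, in kB (2 GiB)
  default_hugepages=2048                        # default hugepage size, in kB
  nr_hugepages=$(( size / default_hugepages ))  # = 1024 pages
  no_nodes=2                                    # NUMA nodes on this rig
  per_node=$(( nr_hugepages / no_nodes ))       # = 512 pages per node
  declare -a nodes_test
  for (( node = 0; node < no_nodes; node++ )); do
      nodes_test[node]=$per_node                # the trace fills this descending
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512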
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:38.030 23:49:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:02:39.412 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:39.412 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:39.412 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:39.412 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:39.412 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:39.412 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:39.412 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:39.412 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:39.412 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:39.412 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:39.412 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:39.412 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:39.412 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:39.412 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:39.412 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:39.412 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:39.412 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:39.412 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
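[editor's note] With HUGE_EVEN_ALLOC=yes, scripts/setup.sh is asked to spread NRHUGE=1024 pages evenly across the nodes instead of letting the kernel place them. The log does not show the commands setup.sh actually runs; as a hedged sketch, an even per-node reservation ultimately comes down to the kernel's standard per-node sysfs knob:

  # Hedged sketch only (root required); 2048kB is the default hugepage size
  # reported in this log's meminfo output. Whether setup.sh uses exactly this
  # form is not visible here.
  NRHUGE=1024
  nodes=(/sys/devices/system/node/node[0-9]*)    # node0 and node1 on this rig
  per_node=$(( NRHUGE / ${#nodes[@]} ))          # 512 pages per node
  for n in "${nodes[@]}"; do
      echo "$per_node" > "$n/hugepages/hugepages-2048kB/nr_hugepages"
  done

verify_nr_hugepages then reads the numbers back. The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard in the trace checks /sys/kernel/mm/transparent_hugepage/enabled; since THP is not set to "never", the helper goes on to sample AnonHugePages, which is why that lookup follows next.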
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37313676 kB' 'MemAvailable: 42000364 kB' 'Buffers: 2696 kB' 'Cached: 18498688 kB' 'SwapCached: 0 kB' 'Active: 14499932 kB' 'Inactive: 4470784 kB' 'Active(anon): 13910772 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472572 kB' 'Mapped: 233236 kB' 'Shmem: 13441440 kB' 'KReclaimable: 240732 kB' 'Slab: 633896 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393164 kB' 'KernelStack: 13056 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15024356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.413 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37318036 kB' 'MemAvailable: 42004724 kB' 'Buffers: 2696 kB' 'Cached: 18498692 kB' 'SwapCached: 0 kB' 'Active: 14499736 kB' 'Inactive: 4470784 kB' 'Active(anon): 13910576 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472384 kB' 'Mapped: 233176 kB' 'Shmem: 13441444 kB' 'KReclaimable: 240732 kB' 'Slab: 633848 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393116 kB' 'KernelStack: 12976 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15024372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199468 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.414 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.415 23:49:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.415 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the @31/@32 read-test-continue loop skips every remaining /proc/meminfo field (Dirty through HugePages_Rsvd) while looking for HugePages_Surp]
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
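[editor's note: the condensed trace above is setup/common.sh's get_meminfo helper doing a linear scan of a meminfo snapshot, one field per read-test-continue iteration. A minimal runnable sketch of that loop, reconstructed from the traced statements (@17-@33) rather than copied from the SPDK source -- the function body and argument handling are inferred:

    #!/usr/bin/env bash
    shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {     # usage: get_meminfo <field> [<numa-node>]
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # a per-node query reads that node's own meminfo file instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix each line with "Node N "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        local line IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp      # -> 0 in the run above
    get_meminfo HugePages_Surp 0    # -> 0, read from node0's meminfo

Each non-matching field costs exactly one [[ ]] test plus a continue, which is what the elided wall of trace records.]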
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.416 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37318320 kB' 'MemAvailable: 42005008 kB' 'Buffers: 2696 kB' 'Cached: 18498712 kB' 'SwapCached: 0 kB' 'Active: 14499168 kB' 'Inactive: 4470784 kB' 'Active(anon): 13910008 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471788 kB' 'Mapped: 233232 kB' 'Shmem: 13441464 kB' 'KReclaimable: 240732 kB' 'Slab: 633844 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393112 kB' 'KernelStack: 13008 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15024392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199484 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: the @31/@32 loop skips every field from MemTotal through HugePages_Free while looking for HugePages_Rsvd]
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:39.418 nr_hugepages=1024
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:39.418 resv_hugepages=0
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:39.418 surplus_hugepages=0
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:39.418 anon_hugepages=0
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
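[editor's note: lines @99-@109 above are a simple accounting identity -- the 1024 pages this test requested must equal HugePages_Total once surplus and reserved pages are counted. A hedged sketch of that assertion, reusing the get_meminfo sketch from the earlier note (variable names inferred from the trace):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) \
        || echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2

With surp=0 and resv=0 the check reduces to 1024 == 1024, so the test proceeds to the per-node verification below.]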
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.418 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37318320 kB' 'MemAvailable: 42005008 kB' 'Buffers: 2696 kB' 'Cached: 18498712 kB' 'SwapCached: 0 kB' 'Active: 14498868 kB' 'Inactive: 4470784 kB' 'Active(anon): 13909708 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471492 kB' 'Mapped: 233232 kB' 'Shmem: 13441464 kB' 'KReclaimable: 240732 kB' 'Slab: 633844 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393112 kB' 'KernelStack: 13008 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15024416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199500 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: the @31/@32 loop skips every field from MemTotal through Unaccepted while looking for HugePages_Total]
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
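[editor's note: get_nodes (@27-@33) and the loop at @115-@117 encode the "even_2G_alloc" expectation: 1024 pages x 2048 kB = 2 GB (matching 'Hugetlb: 2097152 kB' in the snapshot), split evenly as 512 pages on each of the two NUMA nodes. A hedged sketch of that discovery-and-check logic, reusing the get_meminfo sketch from the first note; simplified, since the real script carries a separate nodes_test array through these checks:

    shopt -s extglob                     # for the node+([0-9]) glob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512    # even split: 512 pages per node
    done
    no_nodes=${#nodes_sys[@]}            # 2 on this machine
    (( no_nodes > 0 )) || exit 1

    for node in "${!nodes_sys[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")
        total=$(get_meminfo HugePages_Total "$node")
        (( total - surp == nodes_sys[node] )) \
            || echo "node$node: got $((total - surp)), expected ${nodes_sys[node]}" >&2
    done

The per-node lookup that starts below reads /sys/devices/system/node/node0/meminfo, whose snapshot indeed reports 'HugePages_Total: 512'.]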
+([0-9]) }") 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22043732 kB' 'MemUsed: 10786152 kB' 'SwapCached: 0 kB' 'Active: 8122636 kB' 'Inactive: 187176 kB' 'Active(anon): 7726480 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071920 kB' 'Mapped: 121924 kB' 'AnonPages: 240972 kB' 'Shmem: 7488588 kB' 'KernelStack: 7400 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 345980 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 230296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.420 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
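The backslash-laden comparisons in this trace are ordinary `[[ ]]` string tests: with `set -x`, bash prints a quoted right-hand side with every character escaped to show it is matched literally rather than as a glob, and the `23:49:08 <test> -- <file>@<line> -- #` prefix comes from the customized PS4 the harness sets rather than the default `+ `. A minimal reproduction (variable values are illustrative, not from setup/common.sh):

```bash
#!/usr/bin/env bash
set -x                    # xtrace, as enabled throughout this job
get=HugePages_Surp
var=MemTotal
# xtrace renders the next line as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ $var == "$get" ]] || echo "no match, keep scanning"
```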
00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.421 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.422 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:39.422 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:39.422 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.422 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
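For readability, here is what the get_meminfo helper being traced at setup/common.sh@17-33 appears to do, reconstructed from the xtrace alone; the redirection, the loop shape, and the extglob handling are assumptions rather than verbatim SPDK source:

```bash
#!/usr/bin/env bash
# Sketch of get_meminfo as suggested by the setup/common.sh@17-33 trace.
shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem line

    mem_f=/proc/meminfo
    # A per-node query reads that NUMA node's own meminfo instead (@23-24).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix each line with "Node N "; strip it so both
    # sources parse identically (@29).
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the condensed scan above
        echo "$val"                        # @33: emit the value, e.g. 0
        return 0
    done
}

get_meminfo HugePages_Surp 1   # surplus 2 MB pages on NUMA node 1
```

The setup/common.sh@16 printf in the trace looks like this `mem` array being replayed into the read loop, which is why the full key list appears once in the log before the comparisons start.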
00:02:39.422 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.422 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.422 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15274804 kB' 'MemUsed: 12437040 kB' 'SwapCached: 0 kB' 'Active: 6376524 kB' 'Inactive: 4283608 kB' 'Active(anon): 6183520 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10429552 kB' 'Mapped: 111308 kB' 'AnonPages: 230736 kB' 'Shmem: 5952940 kB' 'KernelStack: 5576 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125048 kB' 'Slab: 287864 kB' 'SReclaimable: 125048 kB' 'SUnreclaim: 162816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the same setup/common.sh@31-32 read/compare/continue cycle runs over every node1 key above until HugePages_Surp is reached]
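The setup/hugepages.sh fragments woven through this trace (@115-@130) do the actual per-node verification; gathered in one place, the bookkeeping looks roughly like the following, where `nodes_test` (expected pages per node) and `nodes_sys` (what the kernel reports) are populated earlier in the script, `get_meminfo` is the helper sketched above, and the exact echo operands are an assumption:

```bash
# Roughly the verify loops traced at setup/hugepages.sh@115-130.
declare -a nodes_test nodes_sys sorted_t sorted_s
resv=0    # reserved-page adjustment, assumed computed earlier in hugepages.sh

for node in "${!nodes_test[@]}"; do                                   # @115
    (( nodes_test[node] += resv ))                                    # @116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # @117
done

for node in "${!nodes_test[@]}"; do                                   # @126
    sorted_t[nodes_test[node]]=1                                      # @127
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}" # @128
done
# @130 then compares an observed count against the expected one,
# which for this run resolves to [[ 512 == 512 ]].
```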
00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' node0=512 expecting 512 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' node1=512 expecting 512 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:39.423 00:02:39.423 real 0m1.511s 00:02:39.423 user 0m0.618s 00:02:39.423 sys 0m0.860s 00:02:39.423 23:49:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:39.423 23:49:08
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:39.423 ************************************ 00:02:39.423 END TEST even_2G_alloc 00:02:39.423 ************************************ 00:02:39.423 23:49:08 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:39.423 23:49:08 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:39.423 23:49:08 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:39.423 23:49:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:39.682 ************************************ 00:02:39.682 START TEST odd_alloc 00:02:39.682 ************************************ 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.682 23:49:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:41.063 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.063 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.063 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.063 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.063 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.063 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.063 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.063 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.063 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.063 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.063 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.063 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.063 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.063 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.063 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.063 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.063 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.063 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37309780 kB' 'MemAvailable: 41996468 kB' 'Buffers: 2696 kB' 'Cached: 18498816 kB' 'SwapCached: 0 kB' 'Active: 14500804 kB' 
'Inactive: 4470784 kB' 'Active(anon): 13911644 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473540 kB' 'Mapped: 233268 kB' 'Shmem: 13441568 kB' 'KReclaimable: 240732 kB' 'Slab: 633720 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 392988 kB' 'KernelStack: 12992 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15024612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199468 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: setup/common.sh@31-32 scan every key above against AnonHugePages, issuing continue until it matches]
00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- #
local mem_f mem 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37310156 kB' 'MemAvailable: 41996844 kB' 'Buffers: 2696 kB' 'Cached: 18498820 kB' 'SwapCached: 0 kB' 'Active: 14501012 kB' 'Inactive: 4470784 kB' 'Active(anon): 13911852 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473772 kB' 'Mapped: 233324 kB' 'Shmem: 13441572 kB' 'KReclaimable: 240732 kB' 'Slab: 633760 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393028 kB' 'KernelStack: 13024 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15024628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199452 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: the per-key scan for HugePages_Surp begins here; the captured log breaks off partway through it]
setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.065 23:49:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37311024 kB' 'MemAvailable: 41997712 kB' 'Buffers: 2696 kB' 'Cached: 18498840 kB' 'SwapCached: 0 kB' 'Active: 14500784 kB' 'Inactive: 4470784 kB' 'Active(anon): 13911624 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473460 kB' 'Mapped: 233248 kB' 'Shmem: 13441592 kB' 'KReclaimable: 240732 kB' 'Slab: 633736 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393004 kB' 'KernelStack: 13008 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15024648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199452 kB' 
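The lookup traced above is SPDK's get_meminfo helper in setup/common.sh: it snapshots a meminfo file with mapfile, strips any "Node N " prefix so system-wide and per-node files parse identically, then scans "key: value" pairs with IFS=': ' until the requested key matches. A minimal sketch reconstructed from the @-line references in the trace; the exact redirections and argument handling are assumptions, not the verbatim source:

    shopt -s extglob  # required for the +([0-9]) pattern used below

    # Sketch of setup/common.sh:get_meminfo as exercised by this trace
    # (details are reconstructed, not copied from the source tree).
    get_meminfo() {
        local get=$1   # meminfo key to look up, e.g. HugePages_Surp
        local node=$2  # optional NUMA node; empty means system-wide
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node query: switch to the node's own meminfo file if present
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so
        # both file flavors parse the same way
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "key: value [kB]" pairs until the requested key matches
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Here get_meminfo HugePages_Surp prints 0 (the 'HugePages_Surp: 0' field in the dump), which the caller records as surp=0.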
00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:41.066 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.067 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37311024 kB' 'MemAvailable: 41997712 kB' 'Buffers: 2696 kB' 'Cached: 18498840 kB' 'SwapCached: 0 kB' 'Active: 14500784 kB' 'Inactive: 4470784 kB' 'Active(anon): 13911624 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473460 kB' 'Mapped: 233248 kB' 'Shmem: 13441592 kB' 'KReclaimable: 240732 kB' 'Slab: 633736 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393004 kB' 'KernelStack: 13008 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15024648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199452 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[trace condensed: identical "setup/common.sh@31 -- # IFS=': '" / "setup/common.sh@31 -- # read -r var val _" / "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "setup/common.sh@32 -- # continue" entries repeat for every field of the dump above until HugePages_Rsvd is reached]
00:02:41.068 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:41.068 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.068 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:41.068 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:41.068 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:41.068 nr_hugepages=1025
00:02:41.068 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:41.068 resv_hugepages=0
00:02:41.068 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:41.068 surplus_hugepages=0
00:02:41.068 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:41.069 anon_hugepages=0
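The checks that follow (hugepages.sh@107-@110) assert the kernel's hugepage accounting identity for this run: HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages, i.e. 1025 == 1025 + 0 + 0 here. A hypothetical condensation of that flow, reusing the get_meminfo sketch above (verify_odd_alloc is an illustrative name, not from the source):

    # Condensed sketch of the hugepages.sh@99-@110 accounting check.
    verify_odd_alloc() {
        local nr_hugepages=$1  # 1025 in this run
        local surp resv total

        surp=$(get_meminfo HugePages_Surp)    # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
        total=$(get_meminfo HugePages_Total)  # 1025 in this run

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"

        # Kernel identity: HugePages_Total == requested + surplus + reserved
        (( total == nr_hugepages + surp + resv ))
    }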
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.069 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37311024 kB' 'MemAvailable: 41997712 kB' 'Buffers: 2696 kB' 'Cached: 18498856 kB' 'SwapCached: 0 kB' 'Active: 14500716 kB' 'Inactive: 4470784 kB' 'Active(anon): 13911556 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473296 kB' 'Mapped: 233248 kB' 'Shmem: 13441608 kB' 'KReclaimable: 240732 kB' 'Slab: 633736 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393004 kB' 'KernelStack: 13024 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15024668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199468 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
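Once the HugePages_Total re-scan below confirms 1025 pages, the trace turns to the per-node distribution (get_nodes at hugepages.sh@112, visible at the end of this excerpt): odd_alloc deliberately requests an odd total, which is spread across the two NUMA nodes as 512 + 513, and each node is then queried through its own /sys/devices/system/node/nodeN/meminfo file. A sketch of that per-node check; the loop bodies are assumptions reconstructed from the trace, and the expected counts are the values it records:

    shopt -s extglob  # for the +([0-9]) glob, as in the traced script

    # Sketch of get_nodes plus the hugepages.sh@115-@117 loop.
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Trace records nodes_sys[0]=512 and nodes_sys[1]=513: an odd
        # 1025-page total split across the two NUMA nodes. How the
        # value is read here is an assumption.
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done

    for node in "${!nodes_sys[@]}"; do
        # Per-node surplus comes from /sys/devices/system/node/node$node/meminfo
        surp=$(get_meminfo HugePages_Surp "$node")
        echo "node$node: hugepages=${nodes_sys[node]} surplus=$surp"
    done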
[trace condensed: identical "setup/common.sh@31 -- # IFS=': '" / "setup/common.sh@31 -- # read -r var val _" / "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "setup/common.sh@32 -- # continue" entries repeat for every field of the dump above until HugePages_Total is reached]
00:02:41.070 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:41.070 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:02:41.070 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:41.070 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.071 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22025892 kB' 'MemUsed: 10803992 kB' 'SwapCached: 0 kB' 'Active: 8122740 kB' 'Inactive: 187176 kB'
'Active(anon): 7726584 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071924 kB' 'Mapped: 121940 kB' 'AnonPages: 241136 kB' 'Shmem: 7488592 kB' 'KernelStack: 7416 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 345828 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 230144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:41.072 23:49:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.072 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15285132 kB' 'MemUsed: 12426712 kB' 'SwapCached: 0 kB' 'Active: 6377556 kB' 'Inactive: 4283608 kB' 'Active(anon): 6184552 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10429656 kB' 'Mapped: 111308 kB' 'AnonPages: 231652 kB' 'Shmem: 5953044 kB' 'KernelStack: 5592 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125048 kB' 'Slab: 287908 kB' 'SReclaimable: 125048 kB' 'SUnreclaim: 162860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
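
The two per-node dumps above are consumed field by field by the scan that setup/common.sh traces entry by entry: the 'Node N ' prefix is stripped from each line, the line is split on ': ', and every key is compared against the requested field until it matches, at which point the value is echoed and the function returns. A minimal standalone sketch of that loop, assuming the same sysfs layout (get_meminfo_sketch is an illustrative name for this re-implementation, not the script's own function):

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo field scan traced above (simplified
    # re-implementation for illustration; the real logic lives in
    # setup/common.sh's get_meminfo).
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node lookups read the sysfs copy instead of /proc/meminfo.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix
        local var val _
        while IFS=': ' read -r var val _; do
            # The trace's [[ $var == \H\u\g\e... ]] tests are this comparison.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Surp 0   # prints 0 for the node0 dump above
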
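Both per-node lookups return a surplus of 0, so the (( nodes_test[node] += 0 )) steps in the surrounding trace leave the expected counts untouched. What the odd_alloc pass is exercising is an odd global total that cannot be split evenly across two nodes; restating the counts already visible in the two dumps (values copied from the log):

    # node0 reports 'HugePages_Total: 512', node1 reports 'HugePages_Total: 513'
    echo $(( 512 + 513 ))   # -> 1025, the global HugePages_Total echoed earlier

The sorted_t/sorted_s bookkeeping in the entries that follow evidently compares the sorted sets of counts rather than per-node assignments, which is why 'node0=512 expecting 513' and 'node1=513 expecting 512' still satisfy the final [[ 512 513 == \5\1\2\ \5\1\3 ]] check: either node may end up holding the extra page.
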
00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:41.074 node0=512 expecting 513 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:41.074 node1=513 expecting 512 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:41.074 00:02:41.074 real 0m1.569s 00:02:41.074 user 0m0.660s 00:02:41.074 sys 0m0.873s 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:41.074 23:49:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:41.074 ************************************ 00:02:41.074 END TEST odd_alloc 00:02:41.074 ************************************ 00:02:41.074 23:49:10 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:41.074 23:49:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:41.074 23:49:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:41.074 23:49:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:41.074 ************************************ 00:02:41.074 START TEST custom_alloc 00:02:41.074 ************************************ 00:02:41.074 23:49:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:02:41.074 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:41.074 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:41.074 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:41.074 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:41.074 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:41.074 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:41.075 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:41.334 23:49:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:41.334 23:49:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.334 23:49:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:42.716 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:42.716 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:42.716 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:42.716 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:42.716 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:42.716 0000:00:04.3 (8086 0e23): 
Already using the vfio-pci driver 00:02:42.716 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:42.716 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:42.716 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.716 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:42.716 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:42.716 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:42.716 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:42.716 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:42.716 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:42.716 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:42.716 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.716 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36262396 kB' 'MemAvailable: 40949084 kB' 'Buffers: 2696 kB' 'Cached: 18498956 kB' 'SwapCached: 0 kB' 'Active: 14499688 kB' 'Inactive: 4470784 kB' 'Active(anon): 13910528 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472016 kB' 'Mapped: 233268 kB' 
'Shmem: 13441708 kB' 'KReclaimable: 240732 kB' 'Slab: 633436 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 392704 kB' 'KernelStack: 12992 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15024884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.717 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 
23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local 
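That was one complete pass of get_meminfo: snapshot /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given), strip any "Node N " prefix, then compare key after key until the requested field is found and its value echoed. A minimal bash sketch of the pattern, reconstructed from the xtrace alone (the real setup/common.sh may differ in detail):

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f=/proc/meminfo mem
        # Per-node counters live under sysfs; fall back to the global file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long run of 'continue' in the trace
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this box, hence anon=0 above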
00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.718 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36262396 kB' 'MemAvailable: 40949084 kB' 'Buffers: 2696 kB' 'Cached: 18498960 kB' 'SwapCached: 0 kB' 'Active: 14500056 kB' 'Inactive: 4470784 kB' 'Active(anon): 13910896 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472404 kB' 'Mapped: 233268 kB' 'Shmem: 13441712 kB' 'KReclaimable: 240732 kB' 'Slab: 633436 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 392704 kB' 'KernelStack: 12992 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15024900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199548 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: the key-by-key scan repeats for get=HugePages_Surp, continuing past every key from MemTotal through HugePages_Rsvd until HugePages_Surp is reached]
00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
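One reading note on the trace format: inside [[ ]], bash's xtrace prints a quoted right-hand side of == with every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) to mark it as a literal string rather than a glob pattern, so each comparison above is really just $var == "$get". A two-line demonstration, using the same names the trace uses:

    get=HugePages_Surp
    set -x
    [[ MemFree == "$get" ]]   # traced as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]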
00:02:42.721 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36262264 kB' 'MemAvailable: 40948952 kB' 'Buffers: 2696 kB' 'Cached: 18498976 kB' 'SwapCached: 0 kB' 'Active: 14499616 kB' 'Inactive: 4470784 kB' 'Active(anon): 13910456 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471932 kB' 'Mapped: 233256 kB' 'Shmem: 13441728 kB' 'KReclaimable: 240732 kB' 'Slab: 633472 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 392740 kB' 'KernelStack: 13024 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15024920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: the key-by-key scan repeats for get=HugePages_Rsvd, continuing past every key from MemTotal through HugePages_Free until HugePages_Rsvd is reached]
00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:42.724 nr_hugepages=1536 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:42.724 resv_hugepages=0 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:42.724 surplus_hugepages=0 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:42.724 anon_hugepages=0 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
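With resv known, hugepages.sh now has all four numbers for this node (nr_hugepages=1536, resv=0, surp=0, anon=0) and asserts the pool is exactly the 1536 pages that were requested. The figures are also internally consistent with the meminfo dumps: 1536 pages at a 2048 kB Hugepagesize is 3145728 kB, exactly the reported Hugetlb value. A sketch of the check, using the variable names from the trace:

    nr_hugepages=1536 resv=0 surp=0 anon=0
    (( 1536 == nr_hugepages + surp + resv ))   # hugepages.sh@107: requested size == pool + surplus + reserved
    (( 1536 == nr_hugepages ))                 # hugepages.sh@109: the pool itself matches the request
    (( 1536 * 2048 == 3145728 ))               # pages * Hugepagesize (kB) == Hugetlb (kB) from the dump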
13441752 kB' 'KReclaimable: 240732 kB' 'Slab: 633472 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 392740 kB' 'KernelStack: 13024 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15024944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
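What the trace above is doing, in short: setup/common.sh's get_meminfo prints a snapshot of /proc/meminfo (or a node-local meminfo file), then re-reads it one "Key: value" pair at a time with IFS=': ' until the requested key matches, echoing the value. The sketch below is a minimal standalone reconstruction from the visible trace, not a copy of setup/common.sh; the "Node N " prefix stripping is done with sed here instead of the script's extglob parameter expansion.

# Hedged sketch of the get_meminfo key-scan traced above.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node lookups (e.g. "HugePages_Surp 0") switch to the node-local file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        # Same comparison the trace repeats for every key until it hits $get.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Usage matching the calls in this run of the log:
#   get_meminfo_sketch HugePages_Total    # prints 1536 on this box
#   get_meminfo_sketch HugePages_Surp 0   # prints 0
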
00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.724 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.725 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
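At this point the script has confirmed the global accounting (1536 == nr_hugepages + surp + resv, with surplus and reserved both 0), and get_nodes starts enumerating /sys/devices/system/node/node<N> to record the per-node split that custom_alloc requested (512 pages on node0, 1024 on node1 in this run). A hedged sketch of that per-node bookkeeping, with array names mirroring the trace and the expected values taken from this log rather than being fixed requirements:

# Enumerate NUMA node directories, read each node's HugePages_Total, and
# echo the same "nodeN=X expecting Y" lines the test prints further below.
shopt -s extglob
declare -a nodes_sys expected=(512 1024)   # expected split from this run's log
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                      # ".../node1" -> "1"
    nodes_sys[id]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done
for id in "${!nodes_sys[@]}"; do
    echo "node${id}=${nodes_sys[id]} expecting ${expected[id]}"
done
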
00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22019360 kB' 'MemUsed: 10810524 kB' 'SwapCached: 0 kB' 'Active: 8123000 kB' 'Inactive: 187176 kB' 'Active(anon): 7726844 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071936 kB' 'Mapped: 121948 kB' 'AnonPages: 241348 kB' 'Shmem: 7488604 kB' 'KernelStack: 7432 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 345800 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 230116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.726 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.727 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 14243964 kB' 'MemUsed: 13467880 kB' 'SwapCached: 0 kB' 'Active: 6376340 kB' 'Inactive: 4283608 kB' 'Active(anon): 6183336 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10429780 kB' 'Mapped: 111308 kB' 'AnonPages: 230256 kB' 'Shmem: 5953168 kB' 'KernelStack: 5576 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125048 kB' 'Slab: 287672 kB' 'SReclaimable: 125048 kB' 'SUnreclaim: 162624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.728 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.729 23:49:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:42.729 node0=512 expecting 512 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:42.729 node1=1024 expecting 1024 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:42.729 00:02:42.729 real 0m1.567s 00:02:42.729 user 0m0.682s 00:02:42.729 sys 0m0.851s 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:42.729 23:49:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:42.729 ************************************ 00:02:42.729 END TEST custom_alloc 00:02:42.729 ************************************ 00:02:42.729 23:49:11 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:42.729 23:49:11 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:42.729 23:49:11 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:42.729 23:49:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:42.729 ************************************ 00:02:42.729 START TEST no_shrink_alloc 00:02:42.729 ************************************ 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:42.729 23:49:12 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.729 23:49:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:44.134 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.134 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:44.134 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.134 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.134 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.134 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.134 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.134 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.134 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.134 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.134 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.134 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.134 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.134 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.134 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.134 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.134 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37191412 kB' 'MemAvailable: 41878100 kB' 'Buffers: 2696 kB' 'Cached: 18499080 kB' 'SwapCached: 0 kB' 'Active: 14501296 kB' 'Inactive: 4470784 kB' 'Active(anon): 13912136 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473500 kB' 'Mapped: 234128 kB' 'Shmem: 13441832 kB' 'KReclaimable: 240732 kB' 'Slab: 633612 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 392880 kB' 'KernelStack: 13008 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15027280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB' 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.134 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.134 23:49:13 
setup.sh.hugepages.no_shrink_alloc -- [xtrace elided: the setup/common.sh@31 read / @32 compare loop walks every remaining /proc/meminfo key (Buffers through HardwareCorrupted) and continues past each non-match for AnonHugePages]
00:02:44.399 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.399 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.399 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:44.399 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:44.399 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace elided: get_meminfo HugePages_Surp repeats the same setup/common.sh@17-31 locals, mapfile and read preamble shown above]
00:02:44.399 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37192696 kB' 'MemAvailable: 41879384 kB' 'Buffers: 2696 kB' 'Cached: 18499084 kB' 'SwapCached: 0 kB' 'Active: 14504260 kB' 'Inactive: 4470784 kB' 'Active(anon): 13915100 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476448 kB' 'Mapped: 234128 kB' 'Shmem: 13441836 kB' 'KReclaimable: 240732 kB' 'Slab: 633640 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 392908 kB' 'KernelStack: 13008 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15029940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
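The two get_meminfo calls traced so far (AnonHugePages above, HugePages_Surp here) step through the same helper each time. A condensed sketch of what the xtrace is walking, reconstructed from the trace rather than quoted from setup/common.sh, so details may differ:

shopt -s extglob # required by the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # with a node argument, read the kernel's per-node view instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val" # numeric part only, e.g. AnonHugePages -> 0
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
}

Called as anon=$(get_meminfo AnonHugePages), which is exactly the anon=0 assignment the trace records once the matching key is reached.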
00:02:44.399 23:49:13 setup.sh.hugepages.no_shrink_alloc -- [xtrace elided: the read/compare loop scans every /proc/meminfo key from MemTotal through HugePages_Rsvd without matching HugePages_Surp]
00:02:44.400 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.400 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.400 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:44.400 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:44.400 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace elided: get_meminfo HugePages_Rsvd repeats the same setup/common.sh@17-31 locals, mapfile and read preamble]
00:02:44.401 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37189288 kB' 'MemAvailable: 41875976 kB' 'Buffers: 2696 kB' 'Cached: 18499100 kB' 'SwapCached: 0 kB' 'Active: 14505496 kB' 'Inactive: 4470784 kB' 'Active(anon): 13916336 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477664 kB' 'Mapped: 234540 kB' 'Shmem: 13441852 kB' 'KReclaimable: 240732 kB' 'Slab: 633648 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 392916 kB' 'KernelStack: 13040 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15031292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199552 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
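Each snapshot ends with 'HugePages_Total: 1024' 'HugePages_Free: 1024', i.e. the global pool still holds exactly the 1024 pages the test requested for node 0. The per-node breakdown behind checks like node0=512 expecting 512 lives in sysfs; a short illustrative read-back loop (the sysfs path is the standard kernel interface, but the loop itself is not taken from the test suite):

# print the 2 MiB hugepage count provisioned on every NUMA node
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/node}
    nr=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node=$nr"
done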
00:02:44.401 23:49:13 setup.sh.hugepages.no_shrink_alloc -- [xtrace elided: the read/compare loop walks the /proc/meminfo keys for HugePages_Rsvd in the same pattern as the two scans above]
00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:44.402 nr_hugepages=1024 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:44.402 resv_hugepages=0 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:44.402 surplus_hugepages=0 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:44.402 anon_hugepages=0 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.402 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.403 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37189772 kB' 'MemAvailable: 41876460 kB' 'Buffers: 2696 kB' 'Cached: 18499124 kB' 'SwapCached: 0 kB' 'Active: 14499660 kB' 'Inactive: 4470784 kB' 'Active(anon): 13910500 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471868 kB' 'Mapped: 233688 kB' 'Shmem: 13441876 kB' 'KReclaimable: 
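For reference, every one of these scans is an expansion of the same get_meminfo helper in setup/common.sh: snapshot the meminfo source, then walk it key by key until the requested key matches. A minimal bash sketch of that pattern, reconstructed from the xtrace above (the real helper may differ in detail):

  # Sketch of get_meminfo as implied by the trace; not the verbatim SPDK code.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-}                # key to query, optional NUMA node
      local mem_f=/proc/meminfo mem line var val _
      # Per-node queries read the node-local meminfo instead of /proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")        # node files prefix keys with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  # e.g. get_meminfo HugePages_Rsvd -> 0, get_meminfo HugePages_Total -> 1024

The hugepages.sh@107 check above then just confirms the accounting identity: HugePages_Total (1024) equals the requested nr_hugepages plus surplus plus reserved pages, here 1024 + 0 + 0.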
[... repetitive xtrace elided: the same setup/common.sh@31-32 scan over the snapshot above, skipping every key from MemTotal onward until HugePages_Total matches ...]
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:44.404 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[... xtrace elided: get_meminfo locals (get=HugePages_Surp, node=0, so mem_f becomes /sys/devices/system/node/node0/meminfo), mapfile -t mem, "Node 0 " prefix strip ...]
00:02:44.405 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20975148 kB' 'MemUsed: 11854736 kB' 'SwapCached: 0 kB' 'Active: 8122724 kB' 'Inactive: 187176 kB' 'Active(anon): 7726568 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071940 kB' 'Mapped: 121964 kB' 'AnonPages: 241100 kB' 'Shmem: 7488608 kB' 'KernelStack: 7448 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 345832 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 230148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
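The get_nodes expansion above walks /sys/devices/system/node/node* and records a per-node hugepage count (1024 for node0, 0 for node1, hence no_nodes=2). A sketch of that step, assuming the counts come from each node's nr_hugepages file (the trace only shows the already-expanded values):

  # Sketch of the node bookkeeping implied by hugepages.sh@27-33.
  shopt -s extglob nullglob
  declare -a nodes_sys=()
  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # assumed source of the count; only the expanded result is in the trace
          nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))                      # fail when sysfs exposes no nodes
  }
  get_nodes && echo "nodes: ${!nodes_sys[*]} -> ${nodes_sys[*]}"   # nodes: 0 1 -> 1024 0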
[... repetitive xtrace elided: the same per-key scan over the node0 snapshot above, issuing "continue" for every key that is not HugePages_Surp ...]
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:44.406 node0=1024 expecting 1024
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:44.406 23:49:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:02:45.779 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:45.779 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:45.779 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:45.779 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:45.779 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:45.779 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:45.779 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:45.779 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:45.779 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:45.779 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:45.779 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:45.779 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:45.779 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:45.779 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:45.779 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:45.779 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:45.779 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:45.779 INFO: Requested 512 hugepages but 1024 already allocated on node0
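The INFO line is the expected outcome of asking for fewer pages than are already allocated while keeping the pool. A standalone reproduction, assuming setup.sh honors CLEAR_HUGE and NRHUGE from the environment as the trace suggests:

  # Request 512 hugepages without clearing the existing 1024-page pool;
  # setup.sh keeps the larger allocation and only prints the INFO above.
  CLEAR_HUGE=no NRHUGE=512 /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh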
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:45.779 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37214720 kB' 'MemAvailable: 41901408 kB' 'Buffers: 2696 kB' 'Cached: 18499196 kB' 'SwapCached: 0 kB' 'Active: 14500312 kB' 'Inactive: 4470784 kB' 'Active(anon): 13911152 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472384 kB' 'Mapped: 233312 kB' 'Shmem: 13441948 kB' 'KReclaimable: 240732 kB' 'Slab: 633960 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393228 kB' 'KernelStack: 13040 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15025380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[00:02:45.779-00:02:45.780 — setup/common.sh@31-@32 loop here: IFS=': '; read -r var val _; every field from MemTotal through HardwareCorrupted fails [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and is skipped with continue]
00:02:45.780 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:45.780 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:45.780 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:45.780 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
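Unrolled, the get_meminfo call traced above reduces to the following pattern; this is a simplified sketch reconstructed from the trace, not the verbatim setup/common.sh. With no node argument the sysfs test ([[ -e /sys/devices/system/node/node/meminfo ]]) fails and the global /proc/meminfo is scanned, which is why every field from MemTotal onward streams past before AnonHugePages matches:

    #!/usr/bin/env bash
    shopt -s extglob

    get_meminfo() {  # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-}
        local var val _ line mem_f=/proc/meminfo
        local -a mem

        # Per-node statistics live in sysfs and prefix every line with "Node N ".
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node prefix (extglob)

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"  # bare count, or a number in kB, depending on the field
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages  # -> 0 in the run above

Passing a node number switches the source file to the per-node meminfo, which is how the earlier node0=1024 figure was obtained.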
00:02:45.780 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:45.780 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:45.780 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:45.780 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:45.780 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.041 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.041 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.041 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.041 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.041 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.041 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.041 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.041 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37217772 kB' 'MemAvailable: 41904460 kB' 'Buffers: 2696 kB' 'Cached: 18499196 kB' 'SwapCached: 0 kB' 'Active: 14501052 kB' 'Inactive: 4470784 kB' 'Active(anon): 13911892 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473192 kB' 'Mapped: 233388 kB' 'Shmem: 13441948 kB' 'KReclaimable: 240732 kB' 'Slab: 634024 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393292 kB' 'KernelStack: 13040 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15025396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[00:02:46.041-00:02:46.042 — setup/common.sh@31-@32 loop here: IFS=': '; read -r var val _; every field from MemTotal through HugePages_Rsvd fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and is skipped with continue]
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
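With surp in hand, the verifier repeats the same scan for HugePages_Rsvd below. For reference, the counters it extracts are the kernel's standard hugetlb accounting fields; the values shown are the ones this run printed:

    grep -E '^(HugePages_|Hugepagesize)' /proc/meminfo
    # HugePages_Total: 1024     - size of the persistent pool
    # HugePages_Free:  1024     - pool pages not yet handed to any mapping
    # HugePages_Rsvd:  0        - committed by mmap() but not yet faulted in
    # HugePages_Surp:  0        - surplus pages above the persistent pool
    # Hugepagesize:    2048 kB  - so 1024 pages back the 2097152 kB Hugetlb total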
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37218440 kB' 'MemAvailable: 41905128 kB' 'Buffers: 2696 kB' 'Cached: 18499216 kB' 'SwapCached: 0 kB' 'Active: 14500084 kB' 'Inactive: 4470784 kB' 'Active(anon): 13910924 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472100 kB' 'Mapped: 233276 kB' 'Shmem: 13441968 kB' 'KReclaimable: 240732 kB' 'Slab: 634032 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393300 kB' 'KernelStack: 13040 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15025420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB'
[00:02:46.042-00:02:46.043 — setup/common.sh@31-@32 loop here: IFS=': '; read -r var val _; every field from MemTotal through HugePages_Free fails [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and is skipped with continue]
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.042 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:46.043 nr_hugepages=1024 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:46.043 resv_hugepages=0 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:46.043 surplus_hugepages=0 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:46.043 anon_hugepages=0 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:46.043 23:49:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37218692 kB' 'MemAvailable: 41905380 kB' 'Buffers: 2696 kB' 'Cached: 18499240 kB' 'SwapCached: 0 kB' 'Active: 14500092 kB' 'Inactive: 4470784 kB' 'Active(anon): 13910932 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472168 kB' 'Mapped: 233276 kB' 'Shmem: 13441992 kB' 'KReclaimable: 240732 kB' 'Slab: 634032 kB' 'SReclaimable: 240732 kB' 'SUnreclaim: 393300 kB' 'KernelStack: 13072 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15025440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2672220 kB' 'DirectMap2M: 19267584 kB' 'DirectMap1G: 47185920 kB' 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
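The get_meminfo helper traced above and below (setup/common.sh@16-33) boils down to one parse loop over the dump just printed. A minimal sketch reconstructed from the traced line numbers; the real helper may differ in detail, and feeding the loop through process substitution is an assumption since xtrace does not print redirections:

    # Sketch of get_meminfo: print one field from /proc/meminfo,
    # or from a node's meminfo file when a node id is given.
    shopt -s extglob   # required by the +([0-9]) pattern seen in the trace
    get_meminfo() {
        local get=$1
        local node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node counters live in sysfs; fall back to the global file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # First field is the key, second the value, rest (e.g. "kB") dropped.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

This linear scan is why the trace shows one continue per meminfo key until the requested key is hit.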
00:02:46.043 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan repeats over every key of the dump above, MemTotal through Unaccepted -- none matches HugePages_Total, continue]
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
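The get_nodes call just traced (setup/hugepages.sh@27-33) builds the per-node baseline (1024 pages on node0, 0 on node1) that the checks above compare against. A sketch under one stated assumption: the trace records only the resulting assignments, so reading nr_hugepages from each node's sysfs is an inference, not confirmed by the log:

    shopt -s extglob nullglob
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Index = node number, value = that node's current 2048kB page
            # count (source file assumed; the trace shows only final values).
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail if no NUMA nodes were found
    }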
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20985712 kB' 'MemUsed: 11844172 kB' 'SwapCached: 0 kB' 'Active: 8123644 kB' 'Inactive: 187176 kB' 'Active(anon): 7727488 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071956 kB' 'Mapped: 121968 kB' 'AnonPages: 241992 kB' 'Shmem: 7488624 kB' 'KernelStack: 7464 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115684 kB' 'Slab: 345976 kB' 'SReclaimable: 115684 kB' 'SUnreclaim: 230292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan repeats over every node0 key of the dump above, MemTotal through HugePages_Free -- none matches HugePages_Surp, continue]
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:46.044 node0=1024 expecting 1024
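The "node0=1024 expecting 1024" line comes from the comparison loop in hugepages.sh@126-128. The traced `sorted_t[nodes_test[node]]=1` idiom uses indexed arrays as sets: the page count itself becomes the array index, so two distributions agree exactly when the two index sets agree. A sketch (the operand order in the echo is a guess from the single traced output line):

    for node in "${!nodes_test[@]}"; do
        # Arithmetic subscripts: nodes_test[node] evaluates to the count,
        # and that count then serves as the set member.
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done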
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:46.044
00:02:46.044 real 0m3.214s
00:02:46.044 user 0m1.282s
00:02:46.044 sys 0m1.868s
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:46.044 23:49:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:46.044 ************************************
00:02:46.044 END TEST no_shrink_alloc
00:02:46.044 ************************************
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:46.044 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:46.045 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:46.045 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:46.045 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:46.045 23:49:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:46.045
00:02:46.045 real 0m12.447s
00:02:46.045 user 0m4.751s
00:02:46.045 sys 0m6.504s
00:02:46.045 23:49:15 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:46.045 23:49:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:46.045 ************************************
00:02:46.045 END TEST hugepages
00:02:46.045 ************************************
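The clear_hp teardown just traced (setup/hugepages.sh@37-45) returns every hugepage pool to zero before the next suite starts. A minimal sketch; the `> "$hp/nr_hugepages"` target is an assumption, since xtrace never prints redirections:

    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do
            # One hugepages-<size>kB directory per supported page size.
            for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
                echo 0 > "$hp/nr_hugepages"   # redirection target assumed
            done
        done
        export CLEAR_HUGE=yes   # tell later scripts the pools are already clear
    }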
00:02:46.045 23:49:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh
00:02:46.045 23:49:15 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:46.045 23:49:15 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:46.045 23:49:15 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:46.045 ************************************
00:02:46.045 START TEST driver
00:02:46.045 ************************************
00:02:46.045 23:49:15 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh
00:02:46.045 * Looking for test storage...
00:02:46.045 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:02:46.045 23:49:15 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:02:46.045 23:49:15 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:46.045 23:49:15 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:02:48.574 23:49:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:02:48.574 23:49:17 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:48.574 23:49:17 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:48.574 23:49:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:48.574 ************************************
00:02:48.574 START TEST guess_driver
00:02:48.574 ************************************
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:02:48.574 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 ))
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:02:48.833 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:48.833 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:48.833 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:48.833 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:48.833 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:02:48.833 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:02:48.833 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
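pick_driver's vfio branch just traced (setup/driver.sh@21-37) decides whether vfio-pci is usable on this box: IOMMU groups must exist (189 here) or unsafe no-IOMMU mode must be enabled, and modprobe must resolve vfio_pci to real kernel modules. A sketch that folds the traced is_driver/mod/dep helpers into one test; treat it as a reconstruction, not the literal driver.sh:

    vfio() {
        local iommu_groups
        local unsafe_vfio
        # Stays unset/N unless the admin enabled no-IOMMU mode (trace shows N).
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
            && unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            # A loadable module shows up as a chain of "insmod ....ko" lines.
            if [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        return 1
    }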
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:02:48.833 Looking for driver=vfio-pci
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:02:48.833 23:49:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config
00:02:50.207 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@57-61 -- # [per-device loop repeats: [[ -> == \-\> ]], [[ vfio-pci == vfio-pci ]], read -r _ _ _ _ marker setup_driver -- every device in the config output reports vfio-pci]
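The read/test pairs traced here (driver.sh@57-61) verify the pick: `setup output config` prints one line per matched device ending in "-> <bound driver>", and the loop fails the test if any device is bound to something other than vfio-pci. A sketch of that verification, with the failure accounting assumed since the trace shows only the passing path:

    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue        # skip non-device lines
        [[ $setup_driver == "$driver" ]] || fail=1
    done < <(setup output config)                # suite helper, traced above
    (( fail == 0 ))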
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.208 23:49:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:51.143 23:49:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:51.143 23:49:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:51.143 23:49:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:51.143 23:49:20 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:51.143 23:49:20 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:02:51.143 23:49:20 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.143 23:49:20 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.682 00:02:53.682 real 0m4.931s 00:02:53.682 user 0m1.162s 00:02:53.682 sys 0m1.910s 00:02:53.682 23:49:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:53.682 23:49:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:02:53.683 ************************************ 00:02:53.683 END TEST guess_driver 00:02:53.683 ************************************ 00:02:53.683 00:02:53.683 real 0m7.555s 00:02:53.683 user 0m1.818s 00:02:53.683 sys 0m3.020s 00:02:53.683 23:49:22 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:53.683 23:49:22 
setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:53.683 ************************************ 00:02:53.683 END TEST driver 00:02:53.683 ************************************ 00:02:53.683 23:49:22 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:02:53.683 23:49:22 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:53.683 23:49:22 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:53.683 23:49:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:53.683 ************************************ 00:02:53.683 START TEST devices 00:02:53.683 ************************************ 00:02:53.683 23:49:22 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:02:53.683 * Looking for test storage... 00:02:53.683 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:53.683 23:49:22 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:53.683 23:49:22 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:02:53.683 23:49:22 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:53.683 23:49:22 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:55.580 23:49:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:55.580 23:49:24 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:55.580 No valid GPT data, bailing 00:02:55.580 23:49:24 
setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:55.580 23:49:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:02:55.580 23:49:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:55.580 23:49:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:55.580 23:49:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:55.580 23:49:24 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:55.580 23:49:24 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:55.580 23:49:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:55.580 ************************************ 00:02:55.580 START TEST nvme_mount 00:02:55.580 ************************************ 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk 
/dev/nvme0n1 --zap-all 00:02:55.580 23:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:56.518 Creating new GPT entries in memory. 00:02:56.518 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:56.518 other utilities. 00:02:56.518 23:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:56.518 23:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:56.518 23:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:56.518 23:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:56.518 23:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:57.453 Creating new GPT entries in memory. 00:02:57.453 The operation has completed successfully. 00:02:57.453 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:57.453 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:57.453 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 389728 00:02:57.453 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.453 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:57.453 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.453 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:57.453 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:57.453 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
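As an aside, the nvme_mount sequence traced above boils down to a handful of commands. A minimal sketch, assuming a scratch disk at /dev/nvme0n1 and a throwaway mount point (the sector range 2048:2099199 is the same 1 GiB partition the test creates, since 1073741824 / 512 = 2097152 sectors):

  disk=/dev/nvme0n1            # assumption: a scratch NVMe disk, as in this run
  mnt=/tmp/nvme_mount          # placeholder mount point
  sgdisk "$disk" --zap-all                              # wipe any existing GPT/MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # one 1 GiB partition
  mkfs.ext4 -qF "${disk}p1"                             # quiet, forced ext4 on the new partition
  mkdir -p "$mnt" && mount "${disk}p1" "$mnt"

The flock around sgdisk mirrors what common.sh@60 does above; holding the BSD lock on the block device keeps udev's rescans from racing the partition-table rewrite.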
00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.454 23:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.838 23:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:58.838 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:58.838 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:59.123 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:59.123 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:59.123 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:59.123 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 
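The wipefs lines above are worth decoding: "53 ef" at offset 0x438 is the little-endian ext4 superblock magic 0xEF53; the 8 bytes "45 46 49 20 50 41 52 54" are the ASCII "EFI PART" GPT signature, erased once at offset 0x200 (the primary header) and once near the end of the disk (the backup header); and "55 aa" at offset 0x1fe is the protective-MBR boot signature. The final ioctl makes the kernel re-read the now-empty partition table. A hedged sketch of that cleanup step, with the mount point as a placeholder:

  mountpoint -q /tmp/nvme_mount && umount /tmp/nvme_mount   # placeholder path; unmount first if mounted
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1    # clear the ext4 magic on the partition
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1        # clear primary+backup GPT and the PMBR on the disk

The run then reformats the bare disk with a size-capped filesystem (mkfs.ext4 -qF /dev/nvme0n1 1024M), which is why the next verify pass looks for mount@nvme0n1:nvme0n1 rather than the p1 partition.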
00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.123 23:49:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- 
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.499 23:49:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.874 
23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.874 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.875 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:02.134 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:02.134 00:03:02.134 real 0m6.721s 00:03:02.134 user 0m1.648s 00:03:02.134 sys 0m2.689s 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:02.134 23:49:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:02.134 ************************************ 00:03:02.134 END TEST nvme_mount 00:03:02.134 ************************************ 00:03:02.134 23:49:31 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:02.134 23:49:31 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
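The long runs of "[[ 0000:xx:04.y == ... ]]" above are the verify helper scanning every PCI function reported by "setup.sh config" output: it reads four whitespace-separated fields per line and only counts a match when the BDF equals PCI_ALLOWED and the status column names the expected active device. A condensed sketch of that loop (the status format is taken from the "Active devices: ..." lines in this log; the config invocation is simplified):

  PCI_ALLOWED=0000:88:00.0     # only the NVMe under test is inspected
  mounts=nvme0n1:nvme0n1p1     # what the device is expected to be busy with
  found=0
  while read -r pci _ _ status; do
      [[ $pci == "$PCI_ALLOWED" ]] || continue                      # skip the sixteen I/OAT functions
      [[ $status == *"Active devices: "*"$mounts"* ]] && found=1    # busy devices must not be rebound
  done < <(setup.sh config)    # simplified; the test calls it through its full workspace path
  (( found == 1 ))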
00:03:02.134 23:49:31 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:02.134 23:49:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:02.134 ************************************ 00:03:02.134 START TEST dm_mount 00:03:02.134 ************************************ 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:02.134 23:49:31 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:03.068 Creating new GPT entries in memory. 00:03:03.068 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:03.068 other utilities. 00:03:03.068 23:49:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:03.068 23:49:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:03.068 23:49:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:03.068 23:49:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:03.068 23:49:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:04.443 Creating new GPT entries in memory. 00:03:04.443 The operation has completed successfully. 
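The partition bounds in these sgdisk calls come straight from the arithmetic traced at common.sh@58-59: the first partition starts at sector 2048, every later one at the previous end + 1, and each spans size = 1073741824 / 512 = 2097152 sectors. For dm_mount's two partitions that yields 2048:2099199 (seen above) and 2099200:4196351 (created next). The loop, reduced to its essentials:

  size=$(( 1073741824 / 512 ))    # 1 GiB in 512-byte sectors = 2097152
  part_start=0 part_end=0
  for part in 1 2; do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=$part:$part_start:$part_end
  done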
00:03:04.443 23:49:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:04.443 23:49:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:04.444 23:49:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:04.444 23:49:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:04.444 23:49:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:05.380 The operation has completed successfully. 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 392417 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:05.380 23:49:34 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.380 23:49:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.335 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:06.336 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.594 23:49:35 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:07.969 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.969 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:07.969 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:07.969 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.969 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.969 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.969 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.969 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
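Once dmsetup create succeeds, the test resolves /dev/mapper/nvme_dm_test to its dm-0 name and checks that both backing partitions list it under holders/, which is how sysfs records device-mapper stacking. A sketch of those checks; note the trace never prints the mapping table dmsetup was given, so the linear concatenation below is an assumption:

  # assumption: concatenate the two 1 GiB partitions with linear targets (table not shown in the log)
  printf '%s\n' '0 2097152 linear /dev/nvme0n1p1 0' \
                '2097152 2097152 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test
  dm=$(readlink -f /dev/mapper/nvme_dm_test)   # resolves to /dev/dm-0 in this run
  dm=${dm##*/}                                 # keep just "dm-0"
  [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]   # each partition must name the dm device as a holder
  [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]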
00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:07.970 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:07.970 00:03:07.970 real 0m5.939s 00:03:07.970 user 0m1.062s 00:03:07.970 sys 0m1.773s 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:07.970 23:49:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:07.970 ************************************ 00:03:07.970 END TEST dm_mount 00:03:07.970 ************************************ 00:03:07.970 23:49:37 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:07.970 23:49:37 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:07.970 23:49:37 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.970 23:49:37 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:07.970 23:49:37 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:07.970 23:49:37 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
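Teardown happens in stacking order: unmount, remove the mapper device, then wipe the partitions it sat on, and only then wipe the bare disk so the kernel can re-read an empty partition table. Roughly, with the mount point as a placeholder:

  mountpoint -q /tmp/dm_mount && umount /tmp/dm_mount       # placeholder path
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
      [[ -b $part ]] && wipefs --all "$part"                # next, the partitions underneath
  done
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1        # finally the disk: GPT, backup GPT, PMBR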
00:03:07.970 23:49:37 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:08.228 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:08.228 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:08.228 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:08.228 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:08.228 23:49:37 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:08.228 23:49:37 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:08.228 23:49:37 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:08.228 23:49:37 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:08.228 23:49:37 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:08.228 23:49:37 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:08.228 23:49:37 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:08.486 00:03:08.486 real 0m14.658s 00:03:08.486 user 0m3.403s 00:03:08.486 sys 0m5.532s 00:03:08.486 23:49:37 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:08.486 23:49:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:08.486 ************************************ 00:03:08.486 END TEST devices 00:03:08.486 ************************************ 00:03:08.486 00:03:08.486 real 0m46.248s 00:03:08.486 user 0m13.733s 00:03:08.486 sys 0m21.086s 00:03:08.486 23:49:37 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:08.486 23:49:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:08.486 ************************************ 00:03:08.486 END TEST setup.sh 00:03:08.486 ************************************ 00:03:08.486 23:49:37 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:09.859 Hugepages 00:03:09.859 node hugesize free / total 00:03:09.859 node0 1048576kB 0 / 0 00:03:09.859 node0 2048kB 2048 / 2048 00:03:09.859 node1 1048576kB 0 / 0 00:03:09.859 node1 2048kB 0 / 0 00:03:09.859 00:03:09.859 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:09.859 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:09.859 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:09.859 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:09.859 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:09.859 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:09.859 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:09.859 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:09.859 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:09.859 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:09.859 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:09.859 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:09.859 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:09.859 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:09.859 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:09.859 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:09.859 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:09.859 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:09.859 23:49:39 -- spdk/autotest.sh@130 -- # uname -s 00:03:09.859 23:49:39 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:09.859 23:49:39 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:09.859 23:49:39 -- common/autotest_common.sh@1527 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:11.237 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:11.237 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:11.237 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:11.237 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:11.237 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:11.237 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:11.237 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:11.237 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:11.237 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:11.237 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:11.237 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:11.237 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:11.237 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:11.237 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:11.237 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:11.237 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:12.170 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:12.428 23:49:41 -- common/autotest_common.sh@1528 -- # sleep 1 00:03:13.361 23:49:42 -- common/autotest_common.sh@1529 -- # bdfs=() 00:03:13.361 23:49:42 -- common/autotest_common.sh@1529 -- # local bdfs 00:03:13.361 23:49:42 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:03:13.361 23:49:42 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:03:13.361 23:49:42 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:13.361 23:49:42 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:13.361 23:49:42 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:13.361 23:49:42 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:13.361 23:49:42 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:13.361 23:49:42 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:13.361 23:49:42 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:13.361 23:49:42 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.754 Waiting for block devices as requested 00:03:14.754 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:14.754 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:15.012 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:15.012 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:15.012 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:15.012 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:15.270 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:15.270 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:15.270 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:15.270 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:15.527 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:15.527 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:15.527 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:15.527 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:15.785 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:15.785 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:15.785 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:16.043 23:49:45 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:03:16.043 23:49:45 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:16.043 23:49:45 -- common/autotest_common.sh@1498 -- # readlink -f 
/sys/class/nvme/nvme0 00:03:16.043 23:49:45 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:03:16.043 23:49:45 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:16.043 23:49:45 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:16.043 23:49:45 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:16.043 23:49:45 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:03:16.043 23:49:45 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:03:16.043 23:49:45 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:03:16.043 23:49:45 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:03:16.043 23:49:45 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:16.043 23:49:45 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:16.043 23:49:45 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:03:16.043 23:49:45 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:16.043 23:49:45 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:16.043 23:49:45 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:03:16.043 23:49:45 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:16.043 23:49:45 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:16.043 23:49:45 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:16.043 23:49:45 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:16.043 23:49:45 -- common/autotest_common.sh@1553 -- # continue 00:03:16.043 23:49:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:16.043 23:49:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:16.043 23:49:45 -- common/autotest_common.sh@10 -- # set +x 00:03:16.043 23:49:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:16.043 23:49:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:16.043 23:49:45 -- common/autotest_common.sh@10 -- # set +x 00:03:16.043 23:49:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:17.449 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:17.449 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:17.449 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:17.449 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:17.449 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:17.449 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:17.449 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:17.449 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:17.449 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:17.449 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:17.449 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:17.449 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:17.449 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:17.449 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:17.449 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:17.449 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:18.385 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:18.385 23:49:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:18.385 23:49:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:18.385 23:49:47 -- common/autotest_common.sh@10 -- # set +x 00:03:18.385 23:49:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:18.385 23:49:47 -- common/autotest_common.sh@1587 -- # 
mapfile -t bdfs 00:03:18.385 23:49:47 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:03:18.385 23:49:47 -- common/autotest_common.sh@1573 -- # bdfs=() 00:03:18.385 23:49:47 -- common/autotest_common.sh@1573 -- # local bdfs 00:03:18.385 23:49:47 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:03:18.385 23:49:47 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:18.385 23:49:47 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:18.385 23:49:47 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:18.385 23:49:47 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:18.385 23:49:47 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:18.642 23:49:47 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:18.642 23:49:47 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:18.642 23:49:47 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:03:18.642 23:49:47 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:18.642 23:49:47 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:03:18.642 23:49:47 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:18.642 23:49:47 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:03:18.642 23:49:47 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:03:18.642 23:49:47 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:03:18.642 23:49:47 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=398317 00:03:18.642 23:49:47 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:18.642 23:49:47 -- common/autotest_common.sh@1594 -- # waitforlisten 398317 00:03:18.642 23:49:47 -- common/autotest_common.sh@827 -- # '[' -z 398317 ']' 00:03:18.642 23:49:47 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:18.642 23:49:47 -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:18.642 23:49:47 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:18.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:18.642 23:49:47 -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:18.642 23:49:47 -- common/autotest_common.sh@10 -- # set +x 00:03:18.642 [2024-05-14 23:49:47.788265] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
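The cleanup above builds its controller list by piping gen_nvme.sh through jq -r '.config[].params.traddr', resolves each /dev/nvmeX node back to its PCI BDF through sysfs, and keeps only controllers whose PCI device ID matches 0x0a54 before attempting the OPAL revert. A minimal standalone sketch of that enumeration, assuming nvme-cli is installed and the controllers are still bound to the kernel nvme driver rather than vfio-pci:

    #!/usr/bin/env bash
    # Hedged sketch of the BDF enumeration traced above; not the harness code.
    set -euo pipefail
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        name=$(basename "$ctrl")                          # e.g. nvme0
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:88:00.0
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")  # e.g. 0x0a54
        # OACS bit 3 (mask 0x8) is namespace management, the same bit the
        # harness extracts from `nvme id-ctrl` output before wiping namespaces.
        oacs=$(nvme id-ctrl "/dev/$name" | awk -F: '/^oacs/ {print $2}')
        printf '%s %s device=%s oacs=%s ns_manage=%d\n' \
            "$name" "$bdf" "$dev_id" "$oacs" "$(( oacs & 0x8 ))"
    done
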
00:03:18.642 [2024-05-14 23:49:47.788360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398317 ] 00:03:18.642 EAL: No free 2048 kB hugepages reported on node 1 00:03:18.642 [2024-05-14 23:49:47.856059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:18.642 [2024-05-14 23:49:47.964089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:18.900 23:49:48 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:18.900 23:49:48 -- common/autotest_common.sh@860 -- # return 0 00:03:18.900 23:49:48 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:03:18.900 23:49:48 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:03:18.900 23:49:48 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:22.183 nvme0n1 00:03:22.183 23:49:51 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:22.441 [2024-05-14 23:49:51.561604] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:22.441 [2024-05-14 23:49:51.561647] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:22.441 request: 00:03:22.441 { 00:03:22.441 "nvme_ctrlr_name": "nvme0", 00:03:22.441 "password": "test", 00:03:22.441 "method": "bdev_nvme_opal_revert", 00:03:22.441 "req_id": 1 00:03:22.441 } 00:03:22.441 Got JSON-RPC error response 00:03:22.441 response: 00:03:22.441 { 00:03:22.441 "code": -32603, 00:03:22.441 "message": "Internal error" 00:03:22.441 } 00:03:22.441 23:49:51 -- common/autotest_common.sh@1600 -- # true 00:03:22.441 23:49:51 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:03:22.441 23:49:51 -- common/autotest_common.sh@1604 -- # killprocess 398317 00:03:22.441 23:49:51 -- common/autotest_common.sh@946 -- # '[' -z 398317 ']' 00:03:22.441 23:49:51 -- common/autotest_common.sh@950 -- # kill -0 398317 00:03:22.441 23:49:51 -- common/autotest_common.sh@951 -- # uname 00:03:22.441 23:49:51 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:22.441 23:49:51 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 398317 00:03:22.441 23:49:51 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:22.441 23:49:51 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:22.441 23:49:51 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 398317' 00:03:22.441 killing process with pid 398317 00:03:22.441 23:49:51 -- common/autotest_common.sh@965 -- # kill 398317 00:03:22.441 23:49:51 -- common/autotest_common.sh@970 -- # wait 398317 00:03:24.340 23:49:53 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:24.340 23:49:53 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:24.340 23:49:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:24.340 23:49:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:24.340 23:49:53 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:24.340 23:49:53 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:24.340 23:49:53 -- common/autotest_common.sh@10 -- # set +x 00:03:24.340 23:49:53 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:24.340 23:49:53 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:24.340 23:49:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:24.340 23:49:53 -- common/autotest_common.sh@10 -- # set +x 00:03:24.340 ************************************ 00:03:24.340 START TEST env 00:03:24.340 ************************************ 00:03:24.340 23:49:53 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:24.340 * Looking for test storage... 00:03:24.340 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:03:24.340 23:49:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:24.340 23:49:53 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:24.340 23:49:53 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:24.340 23:49:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.340 ************************************ 00:03:24.340 START TEST env_memory 00:03:24.340 ************************************ 00:03:24.340 23:49:53 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:24.340 00:03:24.340 00:03:24.340 CUnit - A unit testing framework for C - Version 2.1-3 00:03:24.340 http://cunit.sourceforge.net/ 00:03:24.340 00:03:24.340 00:03:24.340 Suite: memory 00:03:24.340 Test: alloc and free memory map ...[2024-05-14 23:49:53.553183] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:24.340 passed 00:03:24.340 Test: mem map translation ...[2024-05-14 23:49:53.573179] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:24.340 [2024-05-14 23:49:53.573200] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:24.340 [2024-05-14 23:49:53.573251] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:24.340 [2024-05-14 23:49:53.573263] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:24.340 passed 00:03:24.340 Test: mem map registration ...[2024-05-14 23:49:53.613664] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:24.340 [2024-05-14 23:49:53.613684] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:24.340 passed 00:03:24.340 Test: mem map adjacent registrations ...passed 00:03:24.340 00:03:24.340 Run Summary: Type Total Ran Passed Failed Inactive 00:03:24.340 suites 1 1 n/a 0 0 00:03:24.340 tests 4 4 4 0 0 00:03:24.340 asserts 152 152 152 0 n/a 00:03:24.340 00:03:24.340 Elapsed time = 0.141 seconds 00:03:24.340 00:03:24.340 real 0m0.149s 00:03:24.340 user 0m0.136s 00:03:24.340 sys 0m0.012s 00:03:24.340 23:49:53 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:24.340 23:49:53 env.env_memory -- common/autotest_common.sh@10 
-- # set +x 00:03:24.340 ************************************ 00:03:24.340 END TEST env_memory 00:03:24.340 ************************************ 00:03:24.598 23:49:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:24.598 23:49:53 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:24.598 23:49:53 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:24.598 23:49:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.598 ************************************ 00:03:24.598 START TEST env_vtophys 00:03:24.598 ************************************ 00:03:24.598 23:49:53 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:24.598 EAL: lib.eal log level changed from notice to debug 00:03:24.598 EAL: Detected lcore 0 as core 0 on socket 0 00:03:24.598 EAL: Detected lcore 1 as core 1 on socket 0 00:03:24.598 EAL: Detected lcore 2 as core 2 on socket 0 00:03:24.598 EAL: Detected lcore 3 as core 3 on socket 0 00:03:24.598 EAL: Detected lcore 4 as core 4 on socket 0 00:03:24.598 EAL: Detected lcore 5 as core 5 on socket 0 00:03:24.598 EAL: Detected lcore 6 as core 8 on socket 0 00:03:24.598 EAL: Detected lcore 7 as core 9 on socket 0 00:03:24.598 EAL: Detected lcore 8 as core 10 on socket 0 00:03:24.598 EAL: Detected lcore 9 as core 11 on socket 0 00:03:24.599 EAL: Detected lcore 10 as core 12 on socket 0 00:03:24.599 EAL: Detected lcore 11 as core 13 on socket 0 00:03:24.599 EAL: Detected lcore 12 as core 0 on socket 1 00:03:24.599 EAL: Detected lcore 13 as core 1 on socket 1 00:03:24.599 EAL: Detected lcore 14 as core 2 on socket 1 00:03:24.599 EAL: Detected lcore 15 as core 3 on socket 1 00:03:24.599 EAL: Detected lcore 16 as core 4 on socket 1 00:03:24.599 EAL: Detected lcore 17 as core 5 on socket 1 00:03:24.599 EAL: Detected lcore 18 as core 8 on socket 1 00:03:24.599 EAL: Detected lcore 19 as core 9 on socket 1 00:03:24.599 EAL: Detected lcore 20 as core 10 on socket 1 00:03:24.599 EAL: Detected lcore 21 as core 11 on socket 1 00:03:24.599 EAL: Detected lcore 22 as core 12 on socket 1 00:03:24.599 EAL: Detected lcore 23 as core 13 on socket 1 00:03:24.599 EAL: Detected lcore 24 as core 0 on socket 0 00:03:24.599 EAL: Detected lcore 25 as core 1 on socket 0 00:03:24.599 EAL: Detected lcore 26 as core 2 on socket 0 00:03:24.599 EAL: Detected lcore 27 as core 3 on socket 0 00:03:24.599 EAL: Detected lcore 28 as core 4 on socket 0 00:03:24.599 EAL: Detected lcore 29 as core 5 on socket 0 00:03:24.599 EAL: Detected lcore 30 as core 8 on socket 0 00:03:24.599 EAL: Detected lcore 31 as core 9 on socket 0 00:03:24.599 EAL: Detected lcore 32 as core 10 on socket 0 00:03:24.599 EAL: Detected lcore 33 as core 11 on socket 0 00:03:24.599 EAL: Detected lcore 34 as core 12 on socket 0 00:03:24.599 EAL: Detected lcore 35 as core 13 on socket 0 00:03:24.599 EAL: Detected lcore 36 as core 0 on socket 1 00:03:24.599 EAL: Detected lcore 37 as core 1 on socket 1 00:03:24.599 EAL: Detected lcore 38 as core 2 on socket 1 00:03:24.599 EAL: Detected lcore 39 as core 3 on socket 1 00:03:24.599 EAL: Detected lcore 40 as core 4 on socket 1 00:03:24.599 EAL: Detected lcore 41 as core 5 on socket 1 00:03:24.599 EAL: Detected lcore 42 as core 8 on socket 1 00:03:24.599 EAL: Detected lcore 43 as core 9 on socket 1 00:03:24.599 EAL: Detected lcore 44 as core 10 on socket 1 00:03:24.599 EAL: Detected lcore 45 as core 11 on socket 1 00:03:24.599 EAL: 
Detected lcore 46 as core 12 on socket 1 00:03:24.599 EAL: Detected lcore 47 as core 13 on socket 1 00:03:24.599 EAL: Maximum logical cores by configuration: 128 00:03:24.599 EAL: Detected CPU lcores: 48 00:03:24.599 EAL: Detected NUMA nodes: 2 00:03:24.599 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:24.599 EAL: Detected shared linkage of DPDK 00:03:24.599 EAL: No shared files mode enabled, IPC will be disabled 00:03:24.599 EAL: Bus pci wants IOVA as 'DC' 00:03:24.599 EAL: Buses did not request a specific IOVA mode. 00:03:24.599 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:24.599 EAL: Selected IOVA mode 'VA' 00:03:24.599 EAL: No free 2048 kB hugepages reported on node 1 00:03:24.599 EAL: Probing VFIO support... 00:03:24.599 EAL: IOMMU type 1 (Type 1) is supported 00:03:24.599 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:24.599 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:24.599 EAL: VFIO support initialized 00:03:24.599 EAL: Ask a virtual area of 0x2e000 bytes 00:03:24.599 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:24.599 EAL: Setting up physically contiguous memory... 00:03:24.599 EAL: Setting maximum number of open files to 524288 00:03:24.599 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:24.599 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:24.599 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:24.599 EAL: Ask a virtual area of 0x61000 bytes 00:03:24.599 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:24.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:24.599 EAL: Ask a virtual area of 0x400000000 bytes 00:03:24.599 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:24.599 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:24.599 EAL: Ask a virtual area of 0x61000 bytes 00:03:24.599 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:24.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:24.599 EAL: Ask a virtual area of 0x400000000 bytes 00:03:24.599 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:24.599 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:24.599 EAL: Ask a virtual area of 0x61000 bytes 00:03:24.599 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:24.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:24.599 EAL: Ask a virtual area of 0x400000000 bytes 00:03:24.599 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:24.599 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:24.599 EAL: Ask a virtual area of 0x61000 bytes 00:03:24.599 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:24.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:24.599 EAL: Ask a virtual area of 0x400000000 bytes 00:03:24.599 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:24.599 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:24.599 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:24.599 EAL: Ask a virtual area of 0x61000 bytes 00:03:24.599 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:24.599 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:24.599 EAL: Ask a virtual area of 0x400000000 bytes 00:03:24.599 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:24.599 EAL: VA reserved for memseg list at 
0x201000a00000, size 400000000 00:03:24.599 EAL: Ask a virtual area of 0x61000 bytes 00:03:24.599 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:24.599 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:24.599 EAL: Ask a virtual area of 0x400000000 bytes 00:03:24.599 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:24.599 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:24.599 EAL: Ask a virtual area of 0x61000 bytes 00:03:24.599 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:24.599 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:24.599 EAL: Ask a virtual area of 0x400000000 bytes 00:03:24.599 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:24.599 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:24.599 EAL: Ask a virtual area of 0x61000 bytes 00:03:24.599 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:24.599 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:24.599 EAL: Ask a virtual area of 0x400000000 bytes 00:03:24.599 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:24.599 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:24.599 EAL: Hugepages will be freed exactly as allocated. 00:03:24.599 EAL: No shared files mode enabled, IPC is disabled 00:03:24.599 EAL: No shared files mode enabled, IPC is disabled 00:03:24.599 EAL: TSC frequency is ~2700000 KHz 00:03:24.599 EAL: Main lcore 0 is ready (tid=7fbab6813a00;cpuset=[0]) 00:03:24.599 EAL: Trying to obtain current memory policy. 00:03:24.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.599 EAL: Restoring previous memory policy: 0 00:03:24.599 EAL: request: mp_malloc_sync 00:03:24.599 EAL: No shared files mode enabled, IPC is disabled 00:03:24.599 EAL: Heap on socket 0 was expanded by 2MB 00:03:24.599 EAL: No shared files mode enabled, IPC is disabled 00:03:24.599 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:24.599 EAL: Mem event callback 'spdk:(nil)' registered 00:03:24.599 00:03:24.599 00:03:24.599 CUnit - A unit testing framework for C - Version 2.1-3 00:03:24.599 http://cunit.sourceforge.net/ 00:03:24.599 00:03:24.599 00:03:24.599 Suite: components_suite 00:03:24.599 Test: vtophys_malloc_test ...passed 00:03:24.599 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:24.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.599 EAL: Restoring previous memory policy: 4 00:03:24.599 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.599 EAL: request: mp_malloc_sync 00:03:24.599 EAL: No shared files mode enabled, IPC is disabled 00:03:24.599 EAL: Heap on socket 0 was expanded by 4MB 00:03:24.599 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.599 EAL: request: mp_malloc_sync 00:03:24.599 EAL: No shared files mode enabled, IPC is disabled 00:03:24.599 EAL: Heap on socket 0 was shrunk by 4MB 00:03:24.599 EAL: Trying to obtain current memory policy. 
00:03:24.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.599 EAL: Restoring previous memory policy: 4 00:03:24.599 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.599 EAL: request: mp_malloc_sync 00:03:24.599 EAL: No shared files mode enabled, IPC is disabled 00:03:24.599 EAL: Heap on socket 0 was expanded by 6MB 00:03:24.599 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.599 EAL: request: mp_malloc_sync 00:03:24.599 EAL: No shared files mode enabled, IPC is disabled 00:03:24.599 EAL: Heap on socket 0 was shrunk by 6MB 00:03:24.599 EAL: Trying to obtain current memory policy. 00:03:24.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.599 EAL: Restoring previous memory policy: 4 00:03:24.599 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.599 EAL: request: mp_malloc_sync 00:03:24.599 EAL: No shared files mode enabled, IPC is disabled 00:03:24.599 EAL: Heap on socket 0 was expanded by 10MB 00:03:24.600 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.600 EAL: request: mp_malloc_sync 00:03:24.600 EAL: No shared files mode enabled, IPC is disabled 00:03:24.600 EAL: Heap on socket 0 was shrunk by 10MB 00:03:24.600 EAL: Trying to obtain current memory policy. 00:03:24.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.600 EAL: Restoring previous memory policy: 4 00:03:24.600 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.600 EAL: request: mp_malloc_sync 00:03:24.600 EAL: No shared files mode enabled, IPC is disabled 00:03:24.600 EAL: Heap on socket 0 was expanded by 18MB 00:03:24.600 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.600 EAL: request: mp_malloc_sync 00:03:24.600 EAL: No shared files mode enabled, IPC is disabled 00:03:24.600 EAL: Heap on socket 0 was shrunk by 18MB 00:03:24.600 EAL: Trying to obtain current memory policy. 00:03:24.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.600 EAL: Restoring previous memory policy: 4 00:03:24.600 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.600 EAL: request: mp_malloc_sync 00:03:24.600 EAL: No shared files mode enabled, IPC is disabled 00:03:24.600 EAL: Heap on socket 0 was expanded by 34MB 00:03:24.600 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.600 EAL: request: mp_malloc_sync 00:03:24.600 EAL: No shared files mode enabled, IPC is disabled 00:03:24.600 EAL: Heap on socket 0 was shrunk by 34MB 00:03:24.600 EAL: Trying to obtain current memory policy. 00:03:24.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.600 EAL: Restoring previous memory policy: 4 00:03:24.600 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.600 EAL: request: mp_malloc_sync 00:03:24.600 EAL: No shared files mode enabled, IPC is disabled 00:03:24.600 EAL: Heap on socket 0 was expanded by 66MB 00:03:24.600 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.600 EAL: request: mp_malloc_sync 00:03:24.600 EAL: No shared files mode enabled, IPC is disabled 00:03:24.600 EAL: Heap on socket 0 was shrunk by 66MB 00:03:24.600 EAL: Trying to obtain current memory policy. 
00:03:24.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.600 EAL: Restoring previous memory policy: 4 00:03:24.600 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.600 EAL: request: mp_malloc_sync 00:03:24.600 EAL: No shared files mode enabled, IPC is disabled 00:03:24.600 EAL: Heap on socket 0 was expanded by 130MB 00:03:24.600 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.858 EAL: request: mp_malloc_sync 00:03:24.858 EAL: No shared files mode enabled, IPC is disabled 00:03:24.858 EAL: Heap on socket 0 was shrunk by 130MB 00:03:24.858 EAL: Trying to obtain current memory policy. 00:03:24.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.858 EAL: Restoring previous memory policy: 4 00:03:24.858 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.858 EAL: request: mp_malloc_sync 00:03:24.858 EAL: No shared files mode enabled, IPC is disabled 00:03:24.858 EAL: Heap on socket 0 was expanded by 258MB 00:03:24.858 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.859 EAL: request: mp_malloc_sync 00:03:24.859 EAL: No shared files mode enabled, IPC is disabled 00:03:24.859 EAL: Heap on socket 0 was shrunk by 258MB 00:03:24.859 EAL: Trying to obtain current memory policy. 00:03:24.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.116 EAL: Restoring previous memory policy: 4 00:03:25.116 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.116 EAL: request: mp_malloc_sync 00:03:25.116 EAL: No shared files mode enabled, IPC is disabled 00:03:25.116 EAL: Heap on socket 0 was expanded by 514MB 00:03:25.116 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.375 EAL: request: mp_malloc_sync 00:03:25.375 EAL: No shared files mode enabled, IPC is disabled 00:03:25.375 EAL: Heap on socket 0 was shrunk by 514MB 00:03:25.375 EAL: Trying to obtain current memory policy. 
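Each "Heap on socket 0 was expanded/shrunk by N MB" pair above is DPDK growing and releasing its heap in 2048 kB hugepages as the vtophys test mallocs progressively larger buffers, with the 'spdk:(nil)' mem event callback firing on every change. A hedged way to watch that from outside the process, assuming the test binary path matches this workspace layout:

    # Sample /proc/meminfo hugepage counters while the test runs; the
    # Free count should dip and recover in step with the expand/shrink
    # messages above. The binary location is an assumption.
    ./spdk/test/env/vtophys/vtophys &
    pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        awk '/^HugePages_(Total|Free):/ {printf "%s %s  ", $1, $2} END {print ""}' /proc/meminfo
        sleep 0.2
    done
    wait "$pid"
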
00:03:25.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.633 EAL: Restoring previous memory policy: 4 00:03:25.633 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.633 EAL: request: mp_malloc_sync 00:03:25.633 EAL: No shared files mode enabled, IPC is disabled 00:03:25.633 EAL: Heap on socket 0 was expanded by 1026MB 00:03:25.892 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.892 EAL: request: mp_malloc_sync 00:03:25.892 EAL: No shared files mode enabled, IPC is disabled 00:03:25.892 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:25.892 passed 00:03:25.892 00:03:25.892 Run Summary: Type Total Ran Passed Failed Inactive 00:03:25.892 suites 1 1 n/a 0 0 00:03:25.892 tests 2 2 2 0 0 00:03:25.892 asserts 497 497 497 0 n/a 00:03:25.892 00:03:25.892 Elapsed time = 1.379 seconds 00:03:25.892 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.892 EAL: request: mp_malloc_sync 00:03:25.892 EAL: No shared files mode enabled, IPC is disabled 00:03:25.892 EAL: Heap on socket 0 was shrunk by 2MB 00:03:25.892 EAL: No shared files mode enabled, IPC is disabled 00:03:25.892 EAL: No shared files mode enabled, IPC is disabled 00:03:25.892 EAL: No shared files mode enabled, IPC is disabled 00:03:25.892 00:03:25.892 real 0m1.505s 00:03:25.892 user 0m0.851s 00:03:25.892 sys 0m0.625s 00:03:25.892 23:49:55 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:25.892 23:49:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:25.892 ************************************ 00:03:25.892 END TEST env_vtophys 00:03:25.892 ************************************ 00:03:26.151 23:49:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:26.151 23:49:55 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:26.152 23:49:55 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:26.152 23:49:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.152 ************************************ 00:03:26.152 START TEST env_pci 00:03:26.152 ************************************ 00:03:26.152 23:49:55 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:26.152 00:03:26.152 00:03:26.152 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.152 http://cunit.sourceforge.net/ 00:03:26.152 00:03:26.152 00:03:26.152 Suite: pci 00:03:26.152 Test: pci_hook ...[2024-05-14 23:49:55.293321] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 399209 has claimed it 00:03:26.152 EAL: Cannot find device (10000:00:01.0) 00:03:26.152 EAL: Failed to attach device on primary process 00:03:26.152 passed 00:03:26.152 00:03:26.152 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.152 suites 1 1 n/a 0 0 00:03:26.152 tests 1 1 1 0 0 00:03:26.152 asserts 25 25 25 0 n/a 00:03:26.152 00:03:26.152 Elapsed time = 0.027 seconds 00:03:26.152 00:03:26.152 real 0m0.040s 00:03:26.152 user 0m0.006s 00:03:26.152 sys 0m0.034s 00:03:26.152 23:49:55 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:26.152 23:49:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:26.152 ************************************ 00:03:26.152 END TEST env_pci 00:03:26.152 ************************************ 00:03:26.152 23:49:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:26.152 23:49:55 env -- 
env/env.sh@15 -- # uname 00:03:26.152 23:49:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:26.152 23:49:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:26.152 23:49:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.152 23:49:55 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:03:26.152 23:49:55 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:26.152 23:49:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.152 ************************************ 00:03:26.152 START TEST env_dpdk_post_init 00:03:26.152 ************************************ 00:03:26.152 23:49:55 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.152 EAL: Detected CPU lcores: 48 00:03:26.152 EAL: Detected NUMA nodes: 2 00:03:26.152 EAL: Detected shared linkage of DPDK 00:03:26.152 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.152 EAL: Selected IOVA mode 'VA' 00:03:26.152 EAL: No free 2048 kB hugepages reported on node 1 00:03:26.152 EAL: VFIO support initialized 00:03:26.152 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.410 EAL: Using IOMMU type 1 (Type 1) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:26.410 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:27.347 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:30.629 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:30.629 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:30.629 Starting DPDK initialization... 00:03:30.629 Starting SPDK post initialization... 00:03:30.629 SPDK NVMe probe 00:03:30.629 Attaching to 0000:88:00.0 00:03:30.629 Attached to 0000:88:00.0 00:03:30.629 Cleaning up... 
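The probe sequence above is what setup.sh arranges: the sixteen I/OAT channels and the NVMe controller are rebound from their kernel drivers to vfio-pci so EAL can claim them with spdk_ioat and spdk_nvme. A small sketch that reports the current binding for exactly these functions, with the BDF list taken from this log (adjust for other machines):

    for bdf in 0000:00:04.{0..7} 0000:80:04.{0..7} 0000:88:00.0; do
        drv=/sys/bus/pci/devices/$bdf/driver
        if [[ -e $drv ]]; then
            echo "$bdf -> $(basename "$(readlink -f "$drv")")"
        else
            echo "$bdf -> (unbound)"
        fi
    done
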
00:03:30.629 00:03:30.629 real 0m4.421s 00:03:30.629 user 0m3.263s 00:03:30.629 sys 0m0.217s 00:03:30.629 23:49:59 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:30.629 23:49:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:30.629 ************************************ 00:03:30.629 END TEST env_dpdk_post_init 00:03:30.629 ************************************ 00:03:30.629 23:49:59 env -- env/env.sh@26 -- # uname 00:03:30.629 23:49:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:30.629 23:49:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:30.629 23:49:59 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.629 23:49:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.629 23:49:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:30.629 ************************************ 00:03:30.629 START TEST env_mem_callbacks 00:03:30.629 ************************************ 00:03:30.629 23:49:59 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:30.629 EAL: Detected CPU lcores: 48 00:03:30.629 EAL: Detected NUMA nodes: 2 00:03:30.629 EAL: Detected shared linkage of DPDK 00:03:30.629 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:30.629 EAL: Selected IOVA mode 'VA' 00:03:30.629 EAL: No free 2048 kB hugepages reported on node 1 00:03:30.629 EAL: VFIO support initialized 00:03:30.629 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:30.629 00:03:30.629 00:03:30.629 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.629 http://cunit.sourceforge.net/ 00:03:30.629 00:03:30.629 00:03:30.629 Suite: memory 00:03:30.629 Test: test ... 
00:03:30.629 register 0x200000200000 2097152 00:03:30.629 malloc 3145728 00:03:30.629 register 0x200000400000 4194304 00:03:30.629 buf 0x200000500000 len 3145728 PASSED 00:03:30.629 malloc 64 00:03:30.629 buf 0x2000004fff40 len 64 PASSED 00:03:30.629 malloc 4194304 00:03:30.629 register 0x200000800000 6291456 00:03:30.629 buf 0x200000a00000 len 4194304 PASSED 00:03:30.629 free 0x200000500000 3145728 00:03:30.629 free 0x2000004fff40 64 00:03:30.629 unregister 0x200000400000 4194304 PASSED 00:03:30.629 free 0x200000a00000 4194304 00:03:30.629 unregister 0x200000800000 6291456 PASSED 00:03:30.629 malloc 8388608 00:03:30.629 register 0x200000400000 10485760 00:03:30.629 buf 0x200000600000 len 8388608 PASSED 00:03:30.629 free 0x200000600000 8388608 00:03:30.629 unregister 0x200000400000 10485760 PASSED 00:03:30.629 passed 00:03:30.629 00:03:30.629 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.629 suites 1 1 n/a 0 0 00:03:30.629 tests 1 1 1 0 0 00:03:30.629 asserts 15 15 15 0 n/a 00:03:30.629 00:03:30.629 Elapsed time = 0.005 seconds 00:03:30.629 00:03:30.629 real 0m0.055s 00:03:30.629 user 0m0.012s 00:03:30.629 sys 0m0.042s 00:03:30.629 23:49:59 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:30.629 23:49:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:30.629 ************************************ 00:03:30.629 END TEST env_mem_callbacks 00:03:30.629 ************************************ 00:03:30.629 00:03:30.629 real 0m6.492s 00:03:30.629 user 0m4.390s 00:03:30.629 sys 0m1.141s 00:03:30.629 23:49:59 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:30.629 23:49:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:30.629 ************************************ 00:03:30.629 END TEST env 00:03:30.629 ************************************ 00:03:30.629 23:49:59 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:30.629 23:49:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.629 23:49:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.629 23:49:59 -- common/autotest_common.sh@10 -- # set +x 00:03:30.888 ************************************ 00:03:30.888 START TEST rpc 00:03:30.888 ************************************ 00:03:30.888 23:49:59 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:30.888 * Looking for test storage... 00:03:30.888 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:30.888 23:50:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=399961 00:03:30.888 23:50:00 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:30.888 23:50:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.888 23:50:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 399961 00:03:30.888 23:50:00 rpc -- common/autotest_common.sh@827 -- # '[' -z 399961 ']' 00:03:30.888 23:50:00 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.888 23:50:00 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:30.888 23:50:00 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
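Before any rpc_* test can run, the harness starts spdk_tgt (here with -e bdev to enable the bdev tracepoint group) and waitforlisten blocks until the JSON-RPC socket answers. A hedged equivalent of that readiness poll, assuming an SPDK checkout at ./spdk:

    ./spdk/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods is a cheap call that succeeds once the target
        # is accepting JSON-RPC on the default UNIX-domain socket.
        if ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            echo "spdk_tgt ($tgt_pid) listening on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.1
    done
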
00:03:30.888 23:50:00 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:30.888 23:50:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.888 [2024-05-14 23:50:00.075925] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:03:30.888 [2024-05-14 23:50:00.076021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399961 ] 00:03:30.888 EAL: No free 2048 kB hugepages reported on node 1 00:03:30.888 [2024-05-14 23:50:00.147655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.146 [2024-05-14 23:50:00.266647] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:31.146 [2024-05-14 23:50:00.266703] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 399961' to capture a snapshot of events at runtime. 00:03:31.146 [2024-05-14 23:50:00.266718] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:31.146 [2024-05-14 23:50:00.266739] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:31.146 [2024-05-14 23:50:00.266750] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid399961 for offline analysis/debug. 00:03:31.146 [2024-05-14 23:50:00.266808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.405 23:50:00 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:31.405 23:50:00 rpc -- common/autotest_common.sh@860 -- # return 0 00:03:31.405 23:50:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:31.405 23:50:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:31.405 23:50:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:31.405 23:50:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:31.405 23:50:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:31.405 23:50:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:31.405 23:50:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.405 ************************************ 00:03:31.405 START TEST rpc_integrity 00:03:31.405 ************************************ 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:31.405 
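The rpc_integrity test traced here is a create/inspect/delete round trip: confirm bdev_get_bdevs starts empty, create an 8 MiB Malloc bdev with 512-byte blocks, layer a passthru bdev on top, then tear both down and re-check. Condensed into a few rpc.py calls (the same subcommands visible in the trace; the target must already be listening, and the script path is an assumption):

    rpc=./spdk/scripts/rpc.py    # assumed checkout path
    [[ $($rpc bdev_get_bdevs | jq length) -eq 0 ]]   # starts empty
    malloc=$($rpc bdev_malloc_create 8 512)          # prints e.g. Malloc0
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    [[ $($rpc bdev_get_bdevs | jq length) -eq 2 ]]   # malloc + passthru
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    [[ $($rpc bdev_get_bdevs | jq length) -eq 0 ]]   # clean again
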
23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:31.405 { 00:03:31.405 "name": "Malloc0", 00:03:31.405 "aliases": [ 00:03:31.405 "3ec715b2-9950-4bd3-8e74-7e57ee4aba4b" 00:03:31.405 ], 00:03:31.405 "product_name": "Malloc disk", 00:03:31.405 "block_size": 512, 00:03:31.405 "num_blocks": 16384, 00:03:31.405 "uuid": "3ec715b2-9950-4bd3-8e74-7e57ee4aba4b", 00:03:31.405 "assigned_rate_limits": { 00:03:31.405 "rw_ios_per_sec": 0, 00:03:31.405 "rw_mbytes_per_sec": 0, 00:03:31.405 "r_mbytes_per_sec": 0, 00:03:31.405 "w_mbytes_per_sec": 0 00:03:31.405 }, 00:03:31.405 "claimed": false, 00:03:31.405 "zoned": false, 00:03:31.405 "supported_io_types": { 00:03:31.405 "read": true, 00:03:31.405 "write": true, 00:03:31.405 "unmap": true, 00:03:31.405 "write_zeroes": true, 00:03:31.405 "flush": true, 00:03:31.405 "reset": true, 00:03:31.405 "compare": false, 00:03:31.405 "compare_and_write": false, 00:03:31.405 "abort": true, 00:03:31.405 "nvme_admin": false, 00:03:31.405 "nvme_io": false 00:03:31.405 }, 00:03:31.405 "memory_domains": [ 00:03:31.405 { 00:03:31.405 "dma_device_id": "system", 00:03:31.405 "dma_device_type": 1 00:03:31.405 }, 00:03:31.405 { 00:03:31.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.405 "dma_device_type": 2 00:03:31.405 } 00:03:31.405 ], 00:03:31.405 "driver_specific": {} 00:03:31.405 } 00:03:31.405 ]' 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.405 [2024-05-14 23:50:00.673263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:31.405 [2024-05-14 23:50:00.673307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:31.405 [2024-05-14 23:50:00.673330] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf216f0 00:03:31.405 [2024-05-14 23:50:00.673345] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:31.405 [2024-05-14 23:50:00.674786] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:31.405 [2024-05-14 23:50:00.674814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:31.405 Passthru0 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:31.405 23:50:00 rpc.rpc_integrity 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.405 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.405 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:31.405 { 00:03:31.405 "name": "Malloc0", 00:03:31.405 "aliases": [ 00:03:31.405 "3ec715b2-9950-4bd3-8e74-7e57ee4aba4b" 00:03:31.405 ], 00:03:31.405 "product_name": "Malloc disk", 00:03:31.405 "block_size": 512, 00:03:31.405 "num_blocks": 16384, 00:03:31.405 "uuid": "3ec715b2-9950-4bd3-8e74-7e57ee4aba4b", 00:03:31.405 "assigned_rate_limits": { 00:03:31.405 "rw_ios_per_sec": 0, 00:03:31.405 "rw_mbytes_per_sec": 0, 00:03:31.405 "r_mbytes_per_sec": 0, 00:03:31.405 "w_mbytes_per_sec": 0 00:03:31.405 }, 00:03:31.405 "claimed": true, 00:03:31.405 "claim_type": "exclusive_write", 00:03:31.405 "zoned": false, 00:03:31.405 "supported_io_types": { 00:03:31.405 "read": true, 00:03:31.405 "write": true, 00:03:31.405 "unmap": true, 00:03:31.405 "write_zeroes": true, 00:03:31.405 "flush": true, 00:03:31.405 "reset": true, 00:03:31.405 "compare": false, 00:03:31.405 "compare_and_write": false, 00:03:31.405 "abort": true, 00:03:31.405 "nvme_admin": false, 00:03:31.405 "nvme_io": false 00:03:31.405 }, 00:03:31.405 "memory_domains": [ 00:03:31.405 { 00:03:31.406 "dma_device_id": "system", 00:03:31.406 "dma_device_type": 1 00:03:31.406 }, 00:03:31.406 { 00:03:31.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.406 "dma_device_type": 2 00:03:31.406 } 00:03:31.406 ], 00:03:31.406 "driver_specific": {} 00:03:31.406 }, 00:03:31.406 { 00:03:31.406 "name": "Passthru0", 00:03:31.406 "aliases": [ 00:03:31.406 "5f24b7ea-fb85-55d8-8abd-dda186c78bf1" 00:03:31.406 ], 00:03:31.406 "product_name": "passthru", 00:03:31.406 "block_size": 512, 00:03:31.406 "num_blocks": 16384, 00:03:31.406 "uuid": "5f24b7ea-fb85-55d8-8abd-dda186c78bf1", 00:03:31.406 "assigned_rate_limits": { 00:03:31.406 "rw_ios_per_sec": 0, 00:03:31.406 "rw_mbytes_per_sec": 0, 00:03:31.406 "r_mbytes_per_sec": 0, 00:03:31.406 "w_mbytes_per_sec": 0 00:03:31.406 }, 00:03:31.406 "claimed": false, 00:03:31.406 "zoned": false, 00:03:31.406 "supported_io_types": { 00:03:31.406 "read": true, 00:03:31.406 "write": true, 00:03:31.406 "unmap": true, 00:03:31.406 "write_zeroes": true, 00:03:31.406 "flush": true, 00:03:31.406 "reset": true, 00:03:31.406 "compare": false, 00:03:31.406 "compare_and_write": false, 00:03:31.406 "abort": true, 00:03:31.406 "nvme_admin": false, 00:03:31.406 "nvme_io": false 00:03:31.406 }, 00:03:31.406 "memory_domains": [ 00:03:31.406 { 00:03:31.406 "dma_device_id": "system", 00:03:31.406 "dma_device_type": 1 00:03:31.406 }, 00:03:31.406 { 00:03:31.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.406 "dma_device_type": 2 00:03:31.406 } 00:03:31.406 ], 00:03:31.406 "driver_specific": { 00:03:31.406 "passthru": { 00:03:31.406 "name": "Passthru0", 00:03:31.406 "base_bdev_name": "Malloc0" 00:03:31.406 } 00:03:31.406 } 00:03:31.406 } 00:03:31.406 ]' 00:03:31.406 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:31.406 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:31.406 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:31.406 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.406 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.406 23:50:00 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.406 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:31.406 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.406 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.406 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.406 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:31.406 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.406 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.664 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.664 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:31.664 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:31.664 23:50:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:31.664 00:03:31.664 real 0m0.232s 00:03:31.664 user 0m0.151s 00:03:31.664 sys 0m0.023s 00:03:31.664 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:31.664 23:50:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.664 ************************************ 00:03:31.664 END TEST rpc_integrity 00:03:31.664 ************************************ 00:03:31.664 23:50:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:31.664 23:50:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:31.664 23:50:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:31.664 23:50:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.664 ************************************ 00:03:31.664 START TEST rpc_plugins 00:03:31.664 ************************************ 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:31.664 { 00:03:31.664 "name": "Malloc1", 00:03:31.664 "aliases": [ 00:03:31.664 "26abb4a9-20dd-4aea-ae89-1dbde4b2d865" 00:03:31.664 ], 00:03:31.664 "product_name": "Malloc disk", 00:03:31.664 "block_size": 4096, 00:03:31.664 "num_blocks": 256, 00:03:31.664 "uuid": "26abb4a9-20dd-4aea-ae89-1dbde4b2d865", 00:03:31.664 "assigned_rate_limits": { 00:03:31.664 "rw_ios_per_sec": 0, 00:03:31.664 "rw_mbytes_per_sec": 0, 00:03:31.664 "r_mbytes_per_sec": 0, 00:03:31.664 "w_mbytes_per_sec": 0 00:03:31.664 }, 00:03:31.664 "claimed": false, 00:03:31.664 "zoned": false, 00:03:31.664 "supported_io_types": { 00:03:31.664 "read": true, 00:03:31.664 "write": true, 00:03:31.664 "unmap": true, 00:03:31.664 "write_zeroes": true, 00:03:31.664 "flush": true, 00:03:31.664 
"reset": true, 00:03:31.664 "compare": false, 00:03:31.664 "compare_and_write": false, 00:03:31.664 "abort": true, 00:03:31.664 "nvme_admin": false, 00:03:31.664 "nvme_io": false 00:03:31.664 }, 00:03:31.664 "memory_domains": [ 00:03:31.664 { 00:03:31.664 "dma_device_id": "system", 00:03:31.664 "dma_device_type": 1 00:03:31.664 }, 00:03:31.664 { 00:03:31.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.664 "dma_device_type": 2 00:03:31.664 } 00:03:31.664 ], 00:03:31.664 "driver_specific": {} 00:03:31.664 } 00:03:31.664 ]' 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:31.664 23:50:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:31.664 00:03:31.664 real 0m0.110s 00:03:31.664 user 0m0.066s 00:03:31.664 sys 0m0.014s 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:31.664 23:50:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.664 ************************************ 00:03:31.664 END TEST rpc_plugins 00:03:31.664 ************************************ 00:03:31.664 23:50:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:31.664 23:50:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:31.664 23:50:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:31.664 23:50:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.664 ************************************ 00:03:31.664 START TEST rpc_trace_cmd_test 00:03:31.664 ************************************ 00:03:31.664 23:50:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:03:31.664 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:31.664 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:31.664 23:50:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.664 23:50:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:31.923 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid399961", 00:03:31.923 "tpoint_group_mask": "0x8", 00:03:31.923 "iscsi_conn": { 00:03:31.923 "mask": "0x2", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "scsi": { 00:03:31.923 "mask": "0x4", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "bdev": { 00:03:31.923 "mask": "0x8", 00:03:31.923 "tpoint_mask": "0xffffffffffffffff" 00:03:31.923 }, 
00:03:31.923 "nvmf_rdma": { 00:03:31.923 "mask": "0x10", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "nvmf_tcp": { 00:03:31.923 "mask": "0x20", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "ftl": { 00:03:31.923 "mask": "0x40", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "blobfs": { 00:03:31.923 "mask": "0x80", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "dsa": { 00:03:31.923 "mask": "0x200", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "thread": { 00:03:31.923 "mask": "0x400", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "nvme_pcie": { 00:03:31.923 "mask": "0x800", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "iaa": { 00:03:31.923 "mask": "0x1000", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "nvme_tcp": { 00:03:31.923 "mask": "0x2000", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "bdev_nvme": { 00:03:31.923 "mask": "0x4000", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 }, 00:03:31.923 "sock": { 00:03:31.923 "mask": "0x8000", 00:03:31.923 "tpoint_mask": "0x0" 00:03:31.923 } 00:03:31.923 }' 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:31.923 00:03:31.923 real 0m0.200s 00:03:31.923 user 0m0.178s 00:03:31.923 sys 0m0.013s 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:31.923 23:50:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:31.923 ************************************ 00:03:31.923 END TEST rpc_trace_cmd_test 00:03:31.923 ************************************ 00:03:31.923 23:50:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:31.923 23:50:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:31.923 23:50:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:31.923 23:50:01 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:31.923 23:50:01 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:31.923 23:50:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.923 ************************************ 00:03:31.923 START TEST rpc_daemon_integrity 00:03:31.923 ************************************ 00:03:31.923 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:31.923 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:31.923 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.923 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.923 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:03:31.923 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:31.923 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:32.182 { 00:03:32.182 "name": "Malloc2", 00:03:32.182 "aliases": [ 00:03:32.182 "01816bee-ef54-41c5-8227-fd336388288d" 00:03:32.182 ], 00:03:32.182 "product_name": "Malloc disk", 00:03:32.182 "block_size": 512, 00:03:32.182 "num_blocks": 16384, 00:03:32.182 "uuid": "01816bee-ef54-41c5-8227-fd336388288d", 00:03:32.182 "assigned_rate_limits": { 00:03:32.182 "rw_ios_per_sec": 0, 00:03:32.182 "rw_mbytes_per_sec": 0, 00:03:32.182 "r_mbytes_per_sec": 0, 00:03:32.182 "w_mbytes_per_sec": 0 00:03:32.182 }, 00:03:32.182 "claimed": false, 00:03:32.182 "zoned": false, 00:03:32.182 "supported_io_types": { 00:03:32.182 "read": true, 00:03:32.182 "write": true, 00:03:32.182 "unmap": true, 00:03:32.182 "write_zeroes": true, 00:03:32.182 "flush": true, 00:03:32.182 "reset": true, 00:03:32.182 "compare": false, 00:03:32.182 "compare_and_write": false, 00:03:32.182 "abort": true, 00:03:32.182 "nvme_admin": false, 00:03:32.182 "nvme_io": false 00:03:32.182 }, 00:03:32.182 "memory_domains": [ 00:03:32.182 { 00:03:32.182 "dma_device_id": "system", 00:03:32.182 "dma_device_type": 1 00:03:32.182 }, 00:03:32.182 { 00:03:32.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.182 "dma_device_type": 2 00:03:32.182 } 00:03:32.182 ], 00:03:32.182 "driver_specific": {} 00:03:32.182 } 00:03:32.182 ]' 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.182 [2024-05-14 23:50:01.367707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:32.182 [2024-05-14 23:50:01.367751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:32.182 [2024-05-14 23:50:01.367779] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf21340 00:03:32.182 [2024-05-14 23:50:01.367795] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:32.182 [2024-05-14 23:50:01.369166] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:03:32.182 [2024-05-14 23:50:01.369191] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:32.182 Passthru0 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:32.182 { 00:03:32.182 "name": "Malloc2", 00:03:32.182 "aliases": [ 00:03:32.182 "01816bee-ef54-41c5-8227-fd336388288d" 00:03:32.182 ], 00:03:32.182 "product_name": "Malloc disk", 00:03:32.182 "block_size": 512, 00:03:32.182 "num_blocks": 16384, 00:03:32.182 "uuid": "01816bee-ef54-41c5-8227-fd336388288d", 00:03:32.182 "assigned_rate_limits": { 00:03:32.182 "rw_ios_per_sec": 0, 00:03:32.182 "rw_mbytes_per_sec": 0, 00:03:32.182 "r_mbytes_per_sec": 0, 00:03:32.182 "w_mbytes_per_sec": 0 00:03:32.182 }, 00:03:32.182 "claimed": true, 00:03:32.182 "claim_type": "exclusive_write", 00:03:32.182 "zoned": false, 00:03:32.182 "supported_io_types": { 00:03:32.182 "read": true, 00:03:32.182 "write": true, 00:03:32.182 "unmap": true, 00:03:32.182 "write_zeroes": true, 00:03:32.182 "flush": true, 00:03:32.182 "reset": true, 00:03:32.182 "compare": false, 00:03:32.182 "compare_and_write": false, 00:03:32.182 "abort": true, 00:03:32.182 "nvme_admin": false, 00:03:32.182 "nvme_io": false 00:03:32.182 }, 00:03:32.182 "memory_domains": [ 00:03:32.182 { 00:03:32.182 "dma_device_id": "system", 00:03:32.182 "dma_device_type": 1 00:03:32.182 }, 00:03:32.182 { 00:03:32.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.182 "dma_device_type": 2 00:03:32.182 } 00:03:32.182 ], 00:03:32.182 "driver_specific": {} 00:03:32.182 }, 00:03:32.182 { 00:03:32.182 "name": "Passthru0", 00:03:32.182 "aliases": [ 00:03:32.182 "9f7fc3ac-8e20-5453-9a53-68d143d06e55" 00:03:32.182 ], 00:03:32.182 "product_name": "passthru", 00:03:32.182 "block_size": 512, 00:03:32.182 "num_blocks": 16384, 00:03:32.182 "uuid": "9f7fc3ac-8e20-5453-9a53-68d143d06e55", 00:03:32.182 "assigned_rate_limits": { 00:03:32.182 "rw_ios_per_sec": 0, 00:03:32.182 "rw_mbytes_per_sec": 0, 00:03:32.182 "r_mbytes_per_sec": 0, 00:03:32.182 "w_mbytes_per_sec": 0 00:03:32.182 }, 00:03:32.182 "claimed": false, 00:03:32.182 "zoned": false, 00:03:32.182 "supported_io_types": { 00:03:32.182 "read": true, 00:03:32.182 "write": true, 00:03:32.182 "unmap": true, 00:03:32.182 "write_zeroes": true, 00:03:32.182 "flush": true, 00:03:32.182 "reset": true, 00:03:32.182 "compare": false, 00:03:32.182 "compare_and_write": false, 00:03:32.182 "abort": true, 00:03:32.182 "nvme_admin": false, 00:03:32.182 "nvme_io": false 00:03:32.182 }, 00:03:32.182 "memory_domains": [ 00:03:32.182 { 00:03:32.182 "dma_device_id": "system", 00:03:32.182 "dma_device_type": 1 00:03:32.182 }, 00:03:32.182 { 00:03:32.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.182 "dma_device_type": 2 00:03:32.182 } 00:03:32.182 ], 00:03:32.182 "driver_specific": { 00:03:32.182 "passthru": { 00:03:32.182 "name": "Passthru0", 00:03:32.182 "base_bdev_name": "Malloc2" 00:03:32.182 } 00:03:32.182 } 00:03:32.182 } 00:03:32.182 ]' 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 
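(Aside: every rpc_cmd traced in these tests is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock. A by-hand sketch of the same Malloc2/Passthru0 stack this case just built and dumped above — paths relative to the SPDK repo root, target already running:)

./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MiB at 512 B blocks -> the 16384-block Malloc2 above
./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0  # layer a passthru vbdev; Malloc2 becomes claimed
./scripts/rpc.py bdev_get_bdevs | jq length                    # 2, exactly the check that follows
./scripts/rpc.py bdev_passthru_delete Passthru0                # tear down in reverse order
./scripts/rpc.py bdev_malloc_delete Malloc2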
00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.182 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.183 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:32.183 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.183 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.183 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.183 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:32.183 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:32.183 23:50:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:32.183 00:03:32.183 real 0m0.231s 00:03:32.183 user 0m0.151s 00:03:32.183 sys 0m0.024s 00:03:32.183 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:32.183 23:50:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.183 ************************************ 00:03:32.183 END TEST rpc_daemon_integrity 00:03:32.183 ************************************ 00:03:32.183 23:50:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:32.183 23:50:01 rpc -- rpc/rpc.sh@84 -- # killprocess 399961 00:03:32.183 23:50:01 rpc -- common/autotest_common.sh@946 -- # '[' -z 399961 ']' 00:03:32.183 23:50:01 rpc -- common/autotest_common.sh@950 -- # kill -0 399961 00:03:32.183 23:50:01 rpc -- common/autotest_common.sh@951 -- # uname 00:03:32.183 23:50:01 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:32.183 23:50:01 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 399961 00:03:32.440 23:50:01 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:32.440 23:50:01 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:32.440 23:50:01 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 399961' 00:03:32.440 killing process with pid 399961 00:03:32.440 23:50:01 rpc -- common/autotest_common.sh@965 -- # kill 399961 00:03:32.440 23:50:01 rpc -- common/autotest_common.sh@970 -- # wait 399961 00:03:32.698 00:03:32.699 real 0m2.007s 00:03:32.699 user 0m2.493s 00:03:32.699 sys 0m0.611s 00:03:32.699 23:50:01 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:32.699 23:50:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.699 ************************************ 00:03:32.699 END TEST rpc 00:03:32.699 ************************************ 00:03:32.699 23:50:02 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:32.699 23:50:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:32.699 
23:50:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:32.699 23:50:02 -- common/autotest_common.sh@10 -- # set +x 00:03:32.699 ************************************ 00:03:32.699 START TEST skip_rpc 00:03:32.699 ************************************ 00:03:32.699 23:50:02 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:32.957 * Looking for test storage... 00:03:32.957 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:32.957 23:50:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:32.957 23:50:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:32.957 23:50:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:32.957 23:50:02 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:32.957 23:50:02 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:32.957 23:50:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.957 ************************************ 00:03:32.957 START TEST skip_rpc 00:03:32.957 ************************************ 00:03:32.957 23:50:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:03:32.957 23:50:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=400304 00:03:32.957 23:50:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:32.957 23:50:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:32.957 23:50:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:32.957 [2024-05-14 23:50:02.165707] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
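(What this inner skip_rpc case is proving: the target above was launched with --no-rpc-server, so /var/tmp/spdk.sock is never created and any RPC must fail; the NOT wrapper below asserts the non-zero exit. Roughly, by hand:)

build/bin/spdk_tgt --no-rpc-server -m 0x1 &    # target comes up but opens no RPC listener
sleep 5                                        # same fixed settle time the test uses
./scripts/rpc.py spdk_get_version              # expected to fail: nothing listens on /var/tmp/spdk.sock
kill %1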
00:03:32.957 [2024-05-14 23:50:02.165773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400304 ] 00:03:32.957 EAL: No free 2048 kB hugepages reported on node 1 00:03:32.957 [2024-05-14 23:50:02.239163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.215 [2024-05-14 23:50:02.353731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 400304 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 400304 ']' 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 400304 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 400304 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 400304' 00:03:38.513 killing process with pid 400304 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 400304 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 400304 00:03:38.513 00:03:38.513 real 0m5.490s 00:03:38.513 user 0m5.164s 00:03:38.513 sys 0m0.333s 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:38.513 23:50:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.513 ************************************ 00:03:38.513 END TEST skip_rpc 
00:03:38.513 ************************************ 00:03:38.513 23:50:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:38.513 23:50:07 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:38.513 23:50:07 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.513 23:50:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.513 ************************************ 00:03:38.513 START TEST skip_rpc_with_json 00:03:38.513 ************************************ 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=400999 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 400999 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 400999 ']' 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:38.513 23:50:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:38.513 [2024-05-14 23:50:07.714447] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
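(Preview of this skip_rpc_with_json case: once the target is up, the test creates a TCP nvmf transport, dumps the live configuration with save_config — that is the large JSON document printed below — then restarts the target with --json to prove the dump replays cleanly. Condensed, the round trip looks like this, with $spdk_pid standing in for the test's killprocess bookkeeping:)

./scripts/rpc.py nvmf_create_transport -t tcp            # mutate state so the saved config is non-trivial
./scripts/rpc.py save_config > test/rpc/config.json      # emits the JSON dumped below
kill "$spdk_pid"
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json   # replay the config at startup
grep -q 'TCP Transport Init' test/rpc/log.txt            # the assertion the test ends with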
00:03:38.513 [2024-05-14 23:50:07.714535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400999 ] 00:03:38.513 EAL: No free 2048 kB hugepages reported on node 1 00:03:38.513 [2024-05-14 23:50:07.786940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.770 [2024-05-14 23:50:07.902261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.336 [2024-05-14 23:50:08.667866] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:39.336 request: 00:03:39.336 { 00:03:39.336 "trtype": "tcp", 00:03:39.336 "method": "nvmf_get_transports", 00:03:39.336 "req_id": 1 00:03:39.336 } 00:03:39.336 Got JSON-RPC error response 00:03:39.336 response: 00:03:39.336 { 00:03:39.336 "code": -19, 00:03:39.336 "message": "No such device" 00:03:39.336 } 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.336 [2024-05-14 23:50:08.676000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.336 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.594 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.594 23:50:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:39.594 { 00:03:39.594 "subsystems": [ 00:03:39.594 { 00:03:39.594 "subsystem": "keyring", 00:03:39.594 "config": [] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "iobuf", 00:03:39.594 "config": [ 00:03:39.594 { 00:03:39.594 "method": "iobuf_set_options", 00:03:39.594 "params": { 00:03:39.594 "small_pool_count": 8192, 00:03:39.594 "large_pool_count": 1024, 00:03:39.594 "small_bufsize": 8192, 00:03:39.594 "large_bufsize": 135168 00:03:39.594 } 00:03:39.594 } 00:03:39.594 ] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "sock", 00:03:39.594 "config": [ 00:03:39.594 { 00:03:39.594 "method": "sock_impl_set_options", 00:03:39.594 "params": { 00:03:39.594 "impl_name": "posix", 00:03:39.594 "recv_buf_size": 2097152, 00:03:39.594 "send_buf_size": 2097152, 00:03:39.594 "enable_recv_pipe": true, 00:03:39.594 "enable_quickack": false, 00:03:39.594 
"enable_placement_id": 0, 00:03:39.594 "enable_zerocopy_send_server": true, 00:03:39.594 "enable_zerocopy_send_client": false, 00:03:39.594 "zerocopy_threshold": 0, 00:03:39.594 "tls_version": 0, 00:03:39.594 "enable_ktls": false 00:03:39.594 } 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "method": "sock_impl_set_options", 00:03:39.594 "params": { 00:03:39.594 "impl_name": "ssl", 00:03:39.594 "recv_buf_size": 4096, 00:03:39.594 "send_buf_size": 4096, 00:03:39.594 "enable_recv_pipe": true, 00:03:39.594 "enable_quickack": false, 00:03:39.594 "enable_placement_id": 0, 00:03:39.594 "enable_zerocopy_send_server": true, 00:03:39.594 "enable_zerocopy_send_client": false, 00:03:39.594 "zerocopy_threshold": 0, 00:03:39.594 "tls_version": 0, 00:03:39.594 "enable_ktls": false 00:03:39.594 } 00:03:39.594 } 00:03:39.594 ] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "vmd", 00:03:39.594 "config": [] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "accel", 00:03:39.594 "config": [ 00:03:39.594 { 00:03:39.594 "method": "accel_set_options", 00:03:39.594 "params": { 00:03:39.594 "small_cache_size": 128, 00:03:39.594 "large_cache_size": 16, 00:03:39.594 "task_count": 2048, 00:03:39.594 "sequence_count": 2048, 00:03:39.594 "buf_count": 2048 00:03:39.594 } 00:03:39.594 } 00:03:39.594 ] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "bdev", 00:03:39.594 "config": [ 00:03:39.594 { 00:03:39.594 "method": "bdev_set_options", 00:03:39.594 "params": { 00:03:39.594 "bdev_io_pool_size": 65535, 00:03:39.594 "bdev_io_cache_size": 256, 00:03:39.594 "bdev_auto_examine": true, 00:03:39.594 "iobuf_small_cache_size": 128, 00:03:39.594 "iobuf_large_cache_size": 16 00:03:39.594 } 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "method": "bdev_raid_set_options", 00:03:39.594 "params": { 00:03:39.594 "process_window_size_kb": 1024 00:03:39.594 } 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "method": "bdev_iscsi_set_options", 00:03:39.594 "params": { 00:03:39.594 "timeout_sec": 30 00:03:39.594 } 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "method": "bdev_nvme_set_options", 00:03:39.594 "params": { 00:03:39.594 "action_on_timeout": "none", 00:03:39.594 "timeout_us": 0, 00:03:39.594 "timeout_admin_us": 0, 00:03:39.594 "keep_alive_timeout_ms": 10000, 00:03:39.594 "arbitration_burst": 0, 00:03:39.594 "low_priority_weight": 0, 00:03:39.594 "medium_priority_weight": 0, 00:03:39.594 "high_priority_weight": 0, 00:03:39.594 "nvme_adminq_poll_period_us": 10000, 00:03:39.594 "nvme_ioq_poll_period_us": 0, 00:03:39.594 "io_queue_requests": 0, 00:03:39.594 "delay_cmd_submit": true, 00:03:39.594 "transport_retry_count": 4, 00:03:39.594 "bdev_retry_count": 3, 00:03:39.594 "transport_ack_timeout": 0, 00:03:39.594 "ctrlr_loss_timeout_sec": 0, 00:03:39.594 "reconnect_delay_sec": 0, 00:03:39.594 "fast_io_fail_timeout_sec": 0, 00:03:39.594 "disable_auto_failback": false, 00:03:39.594 "generate_uuids": false, 00:03:39.594 "transport_tos": 0, 00:03:39.594 "nvme_error_stat": false, 00:03:39.594 "rdma_srq_size": 0, 00:03:39.594 "io_path_stat": false, 00:03:39.594 "allow_accel_sequence": false, 00:03:39.594 "rdma_max_cq_size": 0, 00:03:39.594 "rdma_cm_event_timeout_ms": 0, 00:03:39.594 "dhchap_digests": [ 00:03:39.594 "sha256", 00:03:39.594 "sha384", 00:03:39.594 "sha512" 00:03:39.594 ], 00:03:39.594 "dhchap_dhgroups": [ 00:03:39.594 "null", 00:03:39.594 "ffdhe2048", 00:03:39.594 "ffdhe3072", 00:03:39.594 "ffdhe4096", 00:03:39.594 "ffdhe6144", 00:03:39.594 "ffdhe8192" 00:03:39.594 ] 00:03:39.594 } 00:03:39.594 }, 00:03:39.594 { 
00:03:39.594 "method": "bdev_nvme_set_hotplug", 00:03:39.594 "params": { 00:03:39.594 "period_us": 100000, 00:03:39.594 "enable": false 00:03:39.594 } 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "method": "bdev_wait_for_examine" 00:03:39.594 } 00:03:39.594 ] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "scsi", 00:03:39.594 "config": null 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "scheduler", 00:03:39.594 "config": [ 00:03:39.594 { 00:03:39.594 "method": "framework_set_scheduler", 00:03:39.594 "params": { 00:03:39.594 "name": "static" 00:03:39.594 } 00:03:39.594 } 00:03:39.594 ] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "vhost_scsi", 00:03:39.594 "config": [] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "vhost_blk", 00:03:39.594 "config": [] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "ublk", 00:03:39.594 "config": [] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "nbd", 00:03:39.594 "config": [] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "nvmf", 00:03:39.594 "config": [ 00:03:39.594 { 00:03:39.594 "method": "nvmf_set_config", 00:03:39.594 "params": { 00:03:39.594 "discovery_filter": "match_any", 00:03:39.594 "admin_cmd_passthru": { 00:03:39.594 "identify_ctrlr": false 00:03:39.594 } 00:03:39.594 } 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "method": "nvmf_set_max_subsystems", 00:03:39.594 "params": { 00:03:39.594 "max_subsystems": 1024 00:03:39.594 } 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "method": "nvmf_set_crdt", 00:03:39.594 "params": { 00:03:39.594 "crdt1": 0, 00:03:39.594 "crdt2": 0, 00:03:39.594 "crdt3": 0 00:03:39.594 } 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "method": "nvmf_create_transport", 00:03:39.594 "params": { 00:03:39.594 "trtype": "TCP", 00:03:39.594 "max_queue_depth": 128, 00:03:39.594 "max_io_qpairs_per_ctrlr": 127, 00:03:39.594 "in_capsule_data_size": 4096, 00:03:39.594 "max_io_size": 131072, 00:03:39.594 "io_unit_size": 131072, 00:03:39.594 "max_aq_depth": 128, 00:03:39.594 "num_shared_buffers": 511, 00:03:39.594 "buf_cache_size": 4294967295, 00:03:39.594 "dif_insert_or_strip": false, 00:03:39.594 "zcopy": false, 00:03:39.594 "c2h_success": true, 00:03:39.594 "sock_priority": 0, 00:03:39.594 "abort_timeout_sec": 1, 00:03:39.594 "ack_timeout": 0, 00:03:39.594 "data_wr_pool_size": 0 00:03:39.594 } 00:03:39.594 } 00:03:39.594 ] 00:03:39.594 }, 00:03:39.594 { 00:03:39.594 "subsystem": "iscsi", 00:03:39.594 "config": [ 00:03:39.594 { 00:03:39.594 "method": "iscsi_set_options", 00:03:39.594 "params": { 00:03:39.594 "node_base": "iqn.2016-06.io.spdk", 00:03:39.594 "max_sessions": 128, 00:03:39.594 "max_connections_per_session": 2, 00:03:39.594 "max_queue_depth": 64, 00:03:39.594 "default_time2wait": 2, 00:03:39.594 "default_time2retain": 20, 00:03:39.594 "first_burst_length": 8192, 00:03:39.595 "immediate_data": true, 00:03:39.595 "allow_duplicated_isid": false, 00:03:39.595 "error_recovery_level": 0, 00:03:39.595 "nop_timeout": 60, 00:03:39.595 "nop_in_interval": 30, 00:03:39.595 "disable_chap": false, 00:03:39.595 "require_chap": false, 00:03:39.595 "mutual_chap": false, 00:03:39.595 "chap_group": 0, 00:03:39.595 "max_large_datain_per_connection": 64, 00:03:39.595 "max_r2t_per_connection": 4, 00:03:39.595 "pdu_pool_size": 36864, 00:03:39.595 "immediate_data_pool_size": 16384, 00:03:39.595 "data_out_pool_size": 2048 00:03:39.595 } 00:03:39.595 } 00:03:39.595 ] 00:03:39.595 } 00:03:39.595 ] 00:03:39.595 } 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 
-- # trap - SIGINT SIGTERM EXIT 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 400999 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 400999 ']' 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 400999 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 400999 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 400999' 00:03:39.595 killing process with pid 400999 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 400999 00:03:39.595 23:50:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 400999 00:03:40.161 23:50:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=401262 00:03:40.161 23:50:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:40.161 23:50:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:45.422 23:50:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 401262 00:03:45.422 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 401262 ']' 00:03:45.422 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 401262 00:03:45.422 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:45.422 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:45.422 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 401262 00:03:45.422 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:45.422 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:45.422 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 401262' 00:03:45.422 killing process with pid 401262 00:03:45.423 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 401262 00:03:45.423 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 401262 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:45.681 00:03:45.681 real 0m7.130s 00:03:45.681 user 0m6.900s 00:03:45.681 sys 0m0.745s 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.681 
************************************ 00:03:45.681 END TEST skip_rpc_with_json 00:03:45.681 ************************************ 00:03:45.681 23:50:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:45.681 23:50:14 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:45.681 23:50:14 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.681 23:50:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.681 ************************************ 00:03:45.681 START TEST skip_rpc_with_delay 00:03:45.681 ************************************ 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.681 [2024-05-14 23:50:14.895813] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
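(That ERROR is the expected outcome: --wait-for-rpc tells the app to defer subsystem initialization until instructed over RPC, which is contradictory with --no-rpc-server. A sketch of the legitimate pairing, for contrast:)

build/bin/spdk_tgt -m 0x1 --wait-for-rpc &     # target starts but holds off subsystem init
./scripts/rpc.py framework_start_init          # kick off the deferred initialization
./scripts/rpc.py framework_wait_init           # block until initialization completes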
00:03:45.681 [2024-05-14 23:50:14.895927] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:45.681 00:03:45.681 real 0m0.062s 00:03:45.681 user 0m0.041s 00:03:45.681 sys 0m0.021s 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:45.681 23:50:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:45.681 ************************************ 00:03:45.681 END TEST skip_rpc_with_delay 00:03:45.681 ************************************ 00:03:45.681 23:50:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:45.681 23:50:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:45.681 23:50:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:45.681 23:50:14 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:45.681 23:50:14 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.681 23:50:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.681 ************************************ 00:03:45.681 START TEST exit_on_failed_rpc_init 00:03:45.681 ************************************ 00:03:45.681 23:50:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:03:45.681 23:50:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=401974 00:03:45.681 23:50:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:45.681 23:50:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 401974 00:03:45.681 23:50:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 401974 ']' 00:03:45.681 23:50:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.681 23:50:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:45.682 23:50:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.682 23:50:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:45.682 23:50:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:45.682 [2024-05-14 23:50:15.010578] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:03:45.682 [2024-05-14 23:50:15.010667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401974 ] 00:03:45.940 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.940 [2024-05-14 23:50:15.079120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.940 [2024-05-14 23:50:15.186391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:46.198 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:46.198 [2024-05-14 23:50:15.501430] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:03:46.198 [2024-05-14 23:50:15.501520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401995 ] 00:03:46.198 EAL: No free 2048 kB hugepages reported on node 1 00:03:46.456 [2024-05-14 23:50:15.574988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.456 [2024-05-14 23:50:15.695249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:46.456 [2024-05-14 23:50:15.695385] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
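(This is precisely the collision exit_on_failed_rpc_init provokes: both spdk_tgt instances defaulted to /var/tmp/spdk.sock, so the second fails RPC init and exits non-zero. Outside the test, the second instance would simply take its own socket via -r; /var/tmp/spdk2.sock below is just an example path:)

build/bin/spdk_tgt -m 0x1 &                                 # first instance owns /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &          # disjoint core mask and a socket of its own
./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version    # address the second instance explicitly with -s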
00:03:46.456 [2024-05-14 23:50:15.695407] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:46.456 [2024-05-14 23:50:15.695420] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 401974 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 401974 ']' 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 401974 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 401974 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 401974' 00:03:46.714 killing process with pid 401974 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 401974 00:03:46.714 23:50:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 401974 00:03:46.971 00:03:46.971 real 0m1.347s 00:03:46.971 user 0m1.496s 00:03:46.971 sys 0m0.474s 00:03:46.971 23:50:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:46.971 23:50:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.971 ************************************ 00:03:46.971 END TEST exit_on_failed_rpc_init 00:03:46.971 ************************************ 00:03:47.229 23:50:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:47.229 00:03:47.229 real 0m14.299s 00:03:47.229 user 0m13.707s 00:03:47.229 sys 0m1.745s 00:03:47.229 23:50:16 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:47.229 23:50:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.229 ************************************ 00:03:47.229 END TEST skip_rpc 00:03:47.229 ************************************ 00:03:47.229 23:50:16 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:47.229 23:50:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:47.229 23:50:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.229 23:50:16 -- 
common/autotest_common.sh@10 -- # set +x 00:03:47.229 ************************************ 00:03:47.229 START TEST rpc_client 00:03:47.229 ************************************ 00:03:47.229 23:50:16 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:47.229 * Looking for test storage... 00:03:47.229 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:03:47.229 23:50:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:47.229 OK 00:03:47.229 23:50:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:47.229 00:03:47.229 real 0m0.069s 00:03:47.229 user 0m0.028s 00:03:47.230 sys 0m0.046s 00:03:47.230 23:50:16 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:47.230 23:50:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:47.230 ************************************ 00:03:47.230 END TEST rpc_client 00:03:47.230 ************************************ 00:03:47.230 23:50:16 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:03:47.230 23:50:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:47.230 23:50:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.230 23:50:16 -- common/autotest_common.sh@10 -- # set +x 00:03:47.230 ************************************ 00:03:47.230 START TEST json_config 00:03:47.230 ************************************ 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:47.230 23:50:16 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:47.230 23:50:16 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:47.230 23:50:16 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:47.230 23:50:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.230 23:50:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.230 23:50:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.230 23:50:16 json_config -- paths/export.sh@5 -- # export PATH 00:03:47.230 23:50:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@47 -- # : 0 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:47.230 23:50:16 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:47.230 INFO: JSON configuration test init 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.230 23:50:16 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:47.230 23:50:16 json_config -- json_config/common.sh@9 -- # local app=target 00:03:47.230 23:50:16 json_config -- json_config/common.sh@10 -- # shift 00:03:47.230 23:50:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:47.230 23:50:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:47.230 23:50:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:47.230 23:50:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.230 23:50:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.230 23:50:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=402237 00:03:47.230 23:50:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:47.230 23:50:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:47.230 Waiting for target to run... 
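The target above is launched with --wait-for-rpc, so it parses no configuration until told to; the waitforlisten step that follows simply polls the UNIX socket until an RPC answers. A minimal sketch of that readiness loop, assuming the SPDK tree layout used in this workspace (rpc_get_methods is a standard SPDK RPC; the retry budget here is illustrative):

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
for i in $(seq 1 100); do    # illustrative retry budget
    # rpc_get_methods answers as soon as the app is up and listening on the socket
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done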
00:03:47.230 23:50:16 json_config -- json_config/common.sh@25 -- # waitforlisten 402237 /var/tmp/spdk_tgt.sock 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@827 -- # '[' -z 402237 ']' 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:47.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:47.230 23:50:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.489 [2024-05-14 23:50:16.615116] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:03:47.489 [2024-05-14 23:50:16.615206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402237 ] 00:03:47.489 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.746 [2024-05-14 23:50:16.966444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.746 [2024-05-14 23:50:17.055313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.310 23:50:17 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:48.310 23:50:17 json_config -- common/autotest_common.sh@860 -- # return 0 00:03:48.310 23:50:17 json_config -- json_config/common.sh@26 -- # echo '' 00:03:48.310 00:03:48.310 23:50:17 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:48.310 23:50:17 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:48.311 23:50:17 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:48.311 23:50:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.311 23:50:17 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:48.311 23:50:17 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:48.311 23:50:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.311 23:50:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.311 23:50:17 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:48.311 23:50:17 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:48.311 23:50:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:51.591 23:50:20 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:51.591 23:50:20 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:51.591 23:50:20 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:51.591 23:50:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.591 23:50:20 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:51.591 23:50:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:51.591 23:50:20 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:03:51.591 23:50:20 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:51.591 23:50:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:51.591 23:50:20 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:51.849 23:50:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.849 23:50:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:51.849 23:50:20 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:51.849 23:50:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.849 23:50:20 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:51.849 23:50:21 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:03:51.849 23:50:21 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:03:51.849 23:50:21 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:03:51.849 23:50:21 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:03:51.849 23:50:21 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:03:51.849 23:50:21 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:03:51.849 23:50:21 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:03:51.849 23:50:21 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:03:51.849 23:50:21 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:51.849 23:50:21 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:03:51.849 23:50:21 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:51.849 23:50:21 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:03:51.849 23:50:21 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:03:51.849 23:50:21 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:03:51.849 23:50:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 
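The notification-type check in this entry is a single RPC whose output is compared, order-sensitively, against the expected pair. Run by hand against the same socket it looks like this (jq flattens the JSON array):

scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
# expected for this test:
#   bdev_register
#   bdev_unregister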
00:03:54.373 23:50:23 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@296 -- # e810=() 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@297 -- # x722=() 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@298 -- # mlx=() 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:03:54.373 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:03:54.373 23:50:23 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:03:54.374 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == 
unknown ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:03:54.374 Found net devices under 0000:09:00.0: mlx_0_0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:03:54.374 Found net devices under 0000:09:00.1: mlx_0_1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@58 -- # uname 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:03:54.374 
23:50:23 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@105 -- # continue 2 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@105 -- # continue 2 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:03:54.374 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:03:54.374 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:03:54.374 altname enp9s0f0np0 00:03:54.374 inet 192.168.100.8/24 scope global mlx_0_0 00:03:54.374 valid_lft forever preferred_lft forever 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:03:54.374 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:03:54.374 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:03:54.374 altname enp9s0f1np1 00:03:54.374 inet 192.168.100.9/24 scope global mlx_0_1 00:03:54.374 valid_lft forever preferred_lft forever 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@422 -- # return 0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:03:54.374 23:50:23 
json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@105 -- # continue 2 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@105 -- # continue 2 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:03:54.374 192.168.100.9' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:03:54.374 192.168.100.9' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@457 -- # head -n 1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:03:54.374 192.168.100.9' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@458 -- # head -n 1 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:03:54.374 23:50:23 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:03:54.374 23:50:23 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:03:54.374 23:50:23 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:54.374 23:50:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:54.631 MallocForNvmf0 00:03:54.631 23:50:23 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:54.631 23:50:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:54.890 MallocForNvmf1 00:03:54.890 23:50:24 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:03:54.890 23:50:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:03:55.148 [2024-05-14 23:50:24.275746] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:03:55.148 [2024-05-14 23:50:24.305674] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21a6730/0x21d3780) succeed. 00:03:55.148 [2024-05-14 23:50:24.319918] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21a8920/0x2233740) succeed. 
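Condensed, the NVMe-oF target configuration this test builds, in this entry and the ones that follow, is a short sequence of RPCs with the arguments visible in the trace (a sketch; assumes the same spdk_tgt socket):

scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420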
00:03:55.148 23:50:24 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:55.148 23:50:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:55.406 23:50:24 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:55.407 23:50:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:55.665 23:50:24 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:55.665 23:50:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:55.922 23:50:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:03:55.922 23:50:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:03:56.181 [2024-05-14 23:50:25.325131] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:03:56.181 [2024-05-14 23:50:25.325545] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:03:56.181 23:50:25 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:56.181 23:50:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.181 23:50:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.181 23:50:25 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:56.181 23:50:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.181 23:50:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.181 23:50:25 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:56.181 23:50:25 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:56.181 23:50:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:56.439 MallocBdevForConfigChangeCheck 00:03:56.439 23:50:25 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:56.439 23:50:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.439 23:50:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.439 23:50:25 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:56.439 23:50:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.696 23:50:26 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: 
shutting down applications...' 00:03:56.696 INFO: shutting down applications... 00:03:56.696 23:50:26 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:56.696 23:50:26 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:56.697 23:50:26 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:56.697 23:50:26 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:58.650 Calling clear_iscsi_subsystem 00:03:58.650 Calling clear_nvmf_subsystem 00:03:58.650 Calling clear_nbd_subsystem 00:03:58.650 Calling clear_ublk_subsystem 00:03:58.650 Calling clear_vhost_blk_subsystem 00:03:58.650 Calling clear_vhost_scsi_subsystem 00:03:58.650 Calling clear_bdev_subsystem 00:03:58.650 23:50:27 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:03:58.650 23:50:27 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:58.650 23:50:27 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:58.650 23:50:27 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.650 23:50:27 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:58.650 23:50:27 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:58.908 23:50:28 json_config -- json_config/json_config.sh@345 -- # break 00:03:58.908 23:50:28 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:58.908 23:50:28 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:58.908 23:50:28 json_config -- json_config/common.sh@31 -- # local app=target 00:03:58.908 23:50:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:58.908 23:50:28 json_config -- json_config/common.sh@35 -- # [[ -n 402237 ]] 00:03:58.908 23:50:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 402237 00:03:58.908 [2024-05-14 23:50:28.015612] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:03:58.908 23:50:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:58.908 23:50:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.908 23:50:28 json_config -- json_config/common.sh@41 -- # kill -0 402237 00:03:58.908 23:50:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:58.908 [2024-05-14 23:50:28.150953] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:03:59.479 23:50:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:59.479 23:50:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.479 23:50:28 json_config -- json_config/common.sh@41 -- # kill -0 402237 00:03:59.479 23:50:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:59.479 23:50:28 json_config -- json_config/common.sh@43 -- # break 00:03:59.479 23:50:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:59.479 23:50:28 json_config -- 
json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:59.479 SPDK target shutdown done 00:03:59.479 23:50:28 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:59.479 INFO: relaunching applications... 00:03:59.479 23:50:28 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.479 23:50:28 json_config -- json_config/common.sh@9 -- # local app=target 00:03:59.479 23:50:28 json_config -- json_config/common.sh@10 -- # shift 00:03:59.479 23:50:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:59.479 23:50:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:59.479 23:50:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:59.479 23:50:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:59.479 23:50:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:59.479 23:50:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=405447 00:03:59.479 23:50:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.479 23:50:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:59.479 Waiting for target to run... 00:03:59.479 23:50:28 json_config -- json_config/common.sh@25 -- # waitforlisten 405447 /var/tmp/spdk_tgt.sock 00:03:59.479 23:50:28 json_config -- common/autotest_common.sh@827 -- # '[' -z 405447 ']' 00:03:59.479 23:50:28 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:59.479 23:50:28 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:59.479 23:50:28 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:59.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:59.479 23:50:28 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:59.479 23:50:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.479 [2024-05-14 23:50:28.571132] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:03:59.479 [2024-05-14 23:50:28.571217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405447 ] 00:03:59.479 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.046 [2024-05-14 23:50:29.115459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.046 [2024-05-14 23:50:29.219121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.331 [2024-05-14 23:50:32.288412] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1707820/0x1571c80) succeed. 00:04:03.331 [2024-05-14 23:50:32.302262] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x170c280/0x15f1d00) succeed. 
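Note that the relaunch differs from the first start: --wait-for-rpc is gone, and --json replays the configuration captured earlier by save_config. The round-trip, as a sketch:

# Capture the live configuration, then restart the target from the saved file
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json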
00:04:03.331 [2024-05-14 23:50:32.361319] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:03.331 [2024-05-14 23:50:32.361632] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:03.897 23:50:32 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:03.897 23:50:32 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:03.897 23:50:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:03.897 00:04:03.897 23:50:32 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:03.897 23:50:32 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:03.897 INFO: Checking if target configuration is the same... 00:04:03.897 23:50:32 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.897 23:50:32 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:03.897 23:50:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.897 + '[' 2 -ne 2 ']' 00:04:03.897 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:03.897 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:03.897 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:03.897 +++ basename /dev/fd/62 00:04:03.897 ++ mktemp /tmp/62.XXX 00:04:03.897 + tmp_file_1=/tmp/62.0iM 00:04:03.897 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.897 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:03.897 + tmp_file_2=/tmp/spdk_tgt_config.json.GwQ 00:04:03.897 + ret=0 00:04:03.897 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.155 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.155 + diff -u /tmp/62.0iM /tmp/spdk_tgt_config.json.GwQ 00:04:04.155 + echo 'INFO: JSON config files are the same' 00:04:04.155 INFO: JSON config files are the same 00:04:04.155 + rm /tmp/62.0iM /tmp/spdk_tgt_config.json.GwQ 00:04:04.155 + exit 0 00:04:04.155 23:50:33 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:04.155 23:50:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:04.155 INFO: changing configuration and checking if this can be detected... 
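The "JSON config files are the same" verdict above comes from diffing two normalized snapshots: json_diff.sh pipes both sides through config_filter.py -method sort, so key and array ordering cannot produce a false mismatch. Roughly, with the helper paths from this workspace (the /tmp file names in the trace are mktemp output; the names below are placeholders):

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'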
00:04:04.155 23:50:33 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:04.155 23:50:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:04.413 23:50:33 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.413 23:50:33 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:04.413 23:50:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:04.413 + '[' 2 -ne 2 ']' 00:04:04.413 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:04.413 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:04.413 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:04.413 +++ basename /dev/fd/62 00:04:04.413 ++ mktemp /tmp/62.XXX 00:04:04.413 + tmp_file_1=/tmp/62.o9X 00:04:04.413 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.413 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:04.413 + tmp_file_2=/tmp/spdk_tgt_config.json.tLT 00:04:04.413 + ret=0 00:04:04.413 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.979 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.979 + diff -u /tmp/62.o9X /tmp/spdk_tgt_config.json.tLT 00:04:04.979 + ret=1 00:04:04.979 + echo '=== Start of file: /tmp/62.o9X ===' 00:04:04.979 + cat /tmp/62.o9X 00:04:04.979 + echo '=== End of file: /tmp/62.o9X ===' 00:04:04.979 + echo '' 00:04:04.979 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tLT ===' 00:04:04.979 + cat /tmp/spdk_tgt_config.json.tLT 00:04:04.979 + echo '=== End of file: /tmp/spdk_tgt_config.json.tLT ===' 00:04:04.979 + echo '' 00:04:04.979 + rm /tmp/62.o9X /tmp/spdk_tgt_config.json.tLT 00:04:04.979 + exit 1 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:04.979 INFO: configuration change detected. 
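MallocBdevForConfigChangeCheck exists purely to be deleted here: removing one bdev is the smallest mutation that makes the next sorted diff non-empty, so the ret=1 above is the success path. The mutation itself is a single RPC:

scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
# the re-run sorted diff now exits non-zero, which the test reports as 'configuration change detected'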
00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@317 -- # [[ -n 405447 ]] 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.979 23:50:34 json_config -- json_config/json_config.sh@323 -- # killprocess 405447 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@946 -- # '[' -z 405447 ']' 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@950 -- # kill -0 405447 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@951 -- # uname 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 405447 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 405447' 00:04:04.979 killing process with pid 405447 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@965 -- # kill 405447 00:04:04.979 [2024-05-14 23:50:34.156026] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:04.979 23:50:34 json_config -- common/autotest_common.sh@970 -- # wait 405447 00:04:04.979 [2024-05-14 23:50:34.301071] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:04:06.880 23:50:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.880 23:50:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:06.880 23:50:35 json_config -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:04:06.880 23:50:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.880 23:50:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:06.880 23:50:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:06.880 INFO: Success 00:04:06.880 23:50:35 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:04:06.880 23:50:35 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:04:06.880 23:50:35 json_config -- nvmf/common.sh@117 -- # sync 00:04:06.880 23:50:35 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:04:06.880 23:50:35 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:04:06.880 23:50:35 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:04:06.880 23:50:35 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:04:06.880 23:50:35 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:04:06.880 00:04:06.880 real 0m19.433s 00:04:06.880 user 0m22.269s 00:04:06.880 sys 0m3.844s 00:04:06.880 23:50:35 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.880 23:50:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.880 ************************************ 00:04:06.880 END TEST json_config 00:04:06.880 ************************************ 00:04:06.880 23:50:35 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:06.880 23:50:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.880 23:50:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.880 23:50:35 -- common/autotest_common.sh@10 -- # set +x 00:04:06.880 ************************************ 00:04:06.880 START TEST json_config_extra_key 00:04:06.880 ************************************ 00:04:06.880 23:50:35 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:06.880 23:50:36 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.880 23:50:36 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.880 23:50:36 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.880 23:50:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.880 23:50:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.880 23:50:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.880 23:50:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:06.880 23:50:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:06.880 23:50:36 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:06.880 23:50:36 
json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:06.880 INFO: launching applications... 00:04:06.880 23:50:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=406367 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.880 Waiting for target to run... 
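Unlike the json_config test, this one never passes through --wait-for-rpc: the whole configuration comes from extra_key.json at init time. The file's contents are not echoed into this log; purely for illustration, a hypothetical minimal config in the "subsystems" layout that spdk_tgt --json accepts could be written like this (the bdev name and sizes below are invented, not taken from the real extra_key.json):

cat > extra_key.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "MallocForTest", "num_blocks": 20480, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF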
00:04:06.880 23:50:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 406367 /var/tmp/spdk_tgt.sock 00:04:06.880 23:50:36 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 406367 ']' 00:04:06.880 23:50:36 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.880 23:50:36 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:06.881 23:50:36 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.881 23:50:36 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:06.881 23:50:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:06.881 [2024-05-14 23:50:36.094074] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:06.881 [2024-05-14 23:50:36.094161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406367 ] 00:04:06.881 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.448 [2024-05-14 23:50:36.623010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.448 [2024-05-14 23:50:36.730446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.706 23:50:37 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:07.706 23:50:37 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:07.706 23:50:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:07.706 00:04:07.706 23:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:07.706 INFO: shutting down applications... 
00:04:07.706 23:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:07.706 23:50:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:07.706 23:50:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:07.706 23:50:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 406367 ]] 00:04:07.706 23:50:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 406367 00:04:07.706 23:50:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:07.706 23:50:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.706 23:50:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 406367 00:04:07.706 23:50:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:08.273 23:50:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:08.273 23:50:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:08.273 23:50:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 406367 00:04:08.273 23:50:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:08.273 23:50:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:08.273 23:50:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:08.273 23:50:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:08.273 SPDK target shutdown done 00:04:08.273 23:50:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:08.273 Success 00:04:08.273 00:04:08.273 real 0m1.546s 00:04:08.273 user 0m1.381s 00:04:08.273 sys 0m0.606s 00:04:08.273 23:50:37 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.273 23:50:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:08.273 ************************************ 00:04:08.273 END TEST json_config_extra_key 00:04:08.273 ************************************ 00:04:08.273 23:50:37 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:08.273 23:50:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.273 23:50:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.273 23:50:37 -- common/autotest_common.sh@10 -- # set +x 00:04:08.273 ************************************ 00:04:08.273 START TEST alias_rpc 00:04:08.273 ************************************ 00:04:08.273 23:50:37 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:08.532 * Looking for test storage... 
00:04:08.532 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:08.532 23:50:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:08.532 23:50:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=406679 00:04:08.532 23:50:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.532 23:50:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 406679 00:04:08.532 23:50:37 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 406679 ']' 00:04:08.532 23:50:37 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.532 23:50:37 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:08.532 23:50:37 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.532 23:50:37 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:08.532 23:50:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.532 [2024-05-14 23:50:37.699403] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:08.532 [2024-05-14 23:50:37.699496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406679 ] 00:04:08.532 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.532 [2024-05-14 23:50:37.765063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.532 [2024-05-14 23:50:37.870510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:09.099 23:50:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:09.099 23:50:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 406679 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 406679 ']' 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 406679 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 406679 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 406679' 00:04:09.099 killing process with pid 406679 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@965 -- # kill 406679 00:04:09.099 23:50:38 alias_rpc -- common/autotest_common.sh@970 -- # wait 406679 00:04:09.665 00:04:09.665 real 0m1.304s 00:04:09.665 user 0m1.389s 00:04:09.665 sys 0m0.411s 00:04:09.665 23:50:38 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:09.665 23:50:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.665 ************************************ 
00:04:09.665 END TEST alias_rpc 00:04:09.665 ************************************ 00:04:09.665 23:50:38 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:09.665 23:50:38 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:09.665 23:50:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:09.665 23:50:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.665 23:50:38 -- common/autotest_common.sh@10 -- # set +x 00:04:09.665 ************************************ 00:04:09.665 START TEST spdkcli_tcp 00:04:09.665 ************************************ 00:04:09.665 23:50:38 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:09.665 * Looking for test storage... 00:04:09.665 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:09.665 23:50:39 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:09.665 23:50:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=406865 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:09.665 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 406865 00:04:09.665 23:50:39 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 406865 ']' 00:04:09.665 23:50:39 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.665 23:50:39 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:09.665 23:50:39 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.924 23:50:39 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:09.924 23:50:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.924 [2024-05-14 23:50:39.060065] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:04:09.924 [2024-05-14 23:50:39.060157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406865 ] 00:04:09.924 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.924 [2024-05-14 23:50:39.126855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.924 [2024-05-14 23:50:39.233994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.924 [2024-05-14 23:50:39.234000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.858 23:50:39 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:10.858 23:50:39 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:10.858 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=407002 00:04:10.858 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:10.858 23:50:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:11.116 [ 00:04:11.116 "bdev_malloc_delete", 00:04:11.116 "bdev_malloc_create", 00:04:11.116 "bdev_null_resize", 00:04:11.116 "bdev_null_delete", 00:04:11.116 "bdev_null_create", 00:04:11.116 "bdev_nvme_cuse_unregister", 00:04:11.116 "bdev_nvme_cuse_register", 00:04:11.116 "bdev_opal_new_user", 00:04:11.116 "bdev_opal_set_lock_state", 00:04:11.116 "bdev_opal_delete", 00:04:11.116 "bdev_opal_get_info", 00:04:11.116 "bdev_opal_create", 00:04:11.116 "bdev_nvme_opal_revert", 00:04:11.116 "bdev_nvme_opal_init", 00:04:11.116 "bdev_nvme_send_cmd", 00:04:11.116 "bdev_nvme_get_path_iostat", 00:04:11.116 "bdev_nvme_get_mdns_discovery_info", 00:04:11.116 "bdev_nvme_stop_mdns_discovery", 00:04:11.116 "bdev_nvme_start_mdns_discovery", 00:04:11.116 "bdev_nvme_set_multipath_policy", 00:04:11.116 "bdev_nvme_set_preferred_path", 00:04:11.116 "bdev_nvme_get_io_paths", 00:04:11.116 "bdev_nvme_remove_error_injection", 00:04:11.116 "bdev_nvme_add_error_injection", 00:04:11.116 "bdev_nvme_get_discovery_info", 00:04:11.116 "bdev_nvme_stop_discovery", 00:04:11.116 "bdev_nvme_start_discovery", 00:04:11.116 "bdev_nvme_get_controller_health_info", 00:04:11.116 "bdev_nvme_disable_controller", 00:04:11.116 "bdev_nvme_enable_controller", 00:04:11.116 "bdev_nvme_reset_controller", 00:04:11.116 "bdev_nvme_get_transport_statistics", 00:04:11.116 "bdev_nvme_apply_firmware", 00:04:11.116 "bdev_nvme_detach_controller", 00:04:11.116 "bdev_nvme_get_controllers", 00:04:11.116 "bdev_nvme_attach_controller", 00:04:11.116 "bdev_nvme_set_hotplug", 00:04:11.116 "bdev_nvme_set_options", 00:04:11.116 "bdev_passthru_delete", 00:04:11.116 "bdev_passthru_create", 00:04:11.116 "bdev_lvol_check_shallow_copy", 00:04:11.116 "bdev_lvol_start_shallow_copy", 00:04:11.116 "bdev_lvol_grow_lvstore", 00:04:11.116 "bdev_lvol_get_lvols", 00:04:11.116 "bdev_lvol_get_lvstores", 00:04:11.116 "bdev_lvol_delete", 00:04:11.116 "bdev_lvol_set_read_only", 00:04:11.116 "bdev_lvol_resize", 00:04:11.116 "bdev_lvol_decouple_parent", 00:04:11.116 "bdev_lvol_inflate", 00:04:11.116 "bdev_lvol_rename", 00:04:11.116 "bdev_lvol_clone_bdev", 00:04:11.116 "bdev_lvol_clone", 00:04:11.116 "bdev_lvol_snapshot", 00:04:11.116 "bdev_lvol_create", 00:04:11.116 "bdev_lvol_delete_lvstore", 00:04:11.116 "bdev_lvol_rename_lvstore", 00:04:11.116 "bdev_lvol_create_lvstore", 00:04:11.116 "bdev_raid_set_options", 
00:04:11.116 "bdev_raid_remove_base_bdev", 00:04:11.116 "bdev_raid_add_base_bdev", 00:04:11.116 "bdev_raid_delete", 00:04:11.116 "bdev_raid_create", 00:04:11.116 "bdev_raid_get_bdevs", 00:04:11.116 "bdev_error_inject_error", 00:04:11.116 "bdev_error_delete", 00:04:11.116 "bdev_error_create", 00:04:11.116 "bdev_split_delete", 00:04:11.116 "bdev_split_create", 00:04:11.116 "bdev_delay_delete", 00:04:11.116 "bdev_delay_create", 00:04:11.116 "bdev_delay_update_latency", 00:04:11.116 "bdev_zone_block_delete", 00:04:11.116 "bdev_zone_block_create", 00:04:11.116 "blobfs_create", 00:04:11.116 "blobfs_detect", 00:04:11.116 "blobfs_set_cache_size", 00:04:11.116 "bdev_aio_delete", 00:04:11.116 "bdev_aio_rescan", 00:04:11.116 "bdev_aio_create", 00:04:11.116 "bdev_ftl_set_property", 00:04:11.116 "bdev_ftl_get_properties", 00:04:11.116 "bdev_ftl_get_stats", 00:04:11.116 "bdev_ftl_unmap", 00:04:11.116 "bdev_ftl_unload", 00:04:11.116 "bdev_ftl_delete", 00:04:11.116 "bdev_ftl_load", 00:04:11.116 "bdev_ftl_create", 00:04:11.116 "bdev_virtio_attach_controller", 00:04:11.117 "bdev_virtio_scsi_get_devices", 00:04:11.117 "bdev_virtio_detach_controller", 00:04:11.117 "bdev_virtio_blk_set_hotplug", 00:04:11.117 "bdev_iscsi_delete", 00:04:11.117 "bdev_iscsi_create", 00:04:11.117 "bdev_iscsi_set_options", 00:04:11.117 "accel_error_inject_error", 00:04:11.117 "ioat_scan_accel_module", 00:04:11.117 "dsa_scan_accel_module", 00:04:11.117 "iaa_scan_accel_module", 00:04:11.117 "keyring_file_remove_key", 00:04:11.117 "keyring_file_add_key", 00:04:11.117 "iscsi_get_histogram", 00:04:11.117 "iscsi_enable_histogram", 00:04:11.117 "iscsi_set_options", 00:04:11.117 "iscsi_get_auth_groups", 00:04:11.117 "iscsi_auth_group_remove_secret", 00:04:11.117 "iscsi_auth_group_add_secret", 00:04:11.117 "iscsi_delete_auth_group", 00:04:11.117 "iscsi_create_auth_group", 00:04:11.117 "iscsi_set_discovery_auth", 00:04:11.117 "iscsi_get_options", 00:04:11.117 "iscsi_target_node_request_logout", 00:04:11.117 "iscsi_target_node_set_redirect", 00:04:11.117 "iscsi_target_node_set_auth", 00:04:11.117 "iscsi_target_node_add_lun", 00:04:11.117 "iscsi_get_stats", 00:04:11.117 "iscsi_get_connections", 00:04:11.117 "iscsi_portal_group_set_auth", 00:04:11.117 "iscsi_start_portal_group", 00:04:11.117 "iscsi_delete_portal_group", 00:04:11.117 "iscsi_create_portal_group", 00:04:11.117 "iscsi_get_portal_groups", 00:04:11.117 "iscsi_delete_target_node", 00:04:11.117 "iscsi_target_node_remove_pg_ig_maps", 00:04:11.117 "iscsi_target_node_add_pg_ig_maps", 00:04:11.117 "iscsi_create_target_node", 00:04:11.117 "iscsi_get_target_nodes", 00:04:11.117 "iscsi_delete_initiator_group", 00:04:11.117 "iscsi_initiator_group_remove_initiators", 00:04:11.117 "iscsi_initiator_group_add_initiators", 00:04:11.117 "iscsi_create_initiator_group", 00:04:11.117 "iscsi_get_initiator_groups", 00:04:11.117 "nvmf_set_crdt", 00:04:11.117 "nvmf_set_config", 00:04:11.117 "nvmf_set_max_subsystems", 00:04:11.117 "nvmf_stop_mdns_prr", 00:04:11.117 "nvmf_publish_mdns_prr", 00:04:11.117 "nvmf_subsystem_get_listeners", 00:04:11.117 "nvmf_subsystem_get_qpairs", 00:04:11.117 "nvmf_subsystem_get_controllers", 00:04:11.117 "nvmf_get_stats", 00:04:11.117 "nvmf_get_transports", 00:04:11.117 "nvmf_create_transport", 00:04:11.117 "nvmf_get_targets", 00:04:11.117 "nvmf_delete_target", 00:04:11.117 "nvmf_create_target", 00:04:11.117 "nvmf_subsystem_allow_any_host", 00:04:11.117 "nvmf_subsystem_remove_host", 00:04:11.117 "nvmf_subsystem_add_host", 00:04:11.117 "nvmf_ns_remove_host", 00:04:11.117 
"nvmf_ns_add_host", 00:04:11.117 "nvmf_subsystem_remove_ns", 00:04:11.117 "nvmf_subsystem_add_ns", 00:04:11.117 "nvmf_subsystem_listener_set_ana_state", 00:04:11.117 "nvmf_discovery_get_referrals", 00:04:11.117 "nvmf_discovery_remove_referral", 00:04:11.117 "nvmf_discovery_add_referral", 00:04:11.117 "nvmf_subsystem_remove_listener", 00:04:11.117 "nvmf_subsystem_add_listener", 00:04:11.117 "nvmf_delete_subsystem", 00:04:11.117 "nvmf_create_subsystem", 00:04:11.117 "nvmf_get_subsystems", 00:04:11.117 "env_dpdk_get_mem_stats", 00:04:11.117 "nbd_get_disks", 00:04:11.117 "nbd_stop_disk", 00:04:11.117 "nbd_start_disk", 00:04:11.117 "ublk_recover_disk", 00:04:11.117 "ublk_get_disks", 00:04:11.117 "ublk_stop_disk", 00:04:11.117 "ublk_start_disk", 00:04:11.117 "ublk_destroy_target", 00:04:11.117 "ublk_create_target", 00:04:11.117 "virtio_blk_create_transport", 00:04:11.117 "virtio_blk_get_transports", 00:04:11.117 "vhost_controller_set_coalescing", 00:04:11.117 "vhost_get_controllers", 00:04:11.117 "vhost_delete_controller", 00:04:11.117 "vhost_create_blk_controller", 00:04:11.117 "vhost_scsi_controller_remove_target", 00:04:11.117 "vhost_scsi_controller_add_target", 00:04:11.117 "vhost_start_scsi_controller", 00:04:11.117 "vhost_create_scsi_controller", 00:04:11.117 "thread_set_cpumask", 00:04:11.117 "framework_get_scheduler", 00:04:11.117 "framework_set_scheduler", 00:04:11.117 "framework_get_reactors", 00:04:11.117 "thread_get_io_channels", 00:04:11.117 "thread_get_pollers", 00:04:11.117 "thread_get_stats", 00:04:11.117 "framework_monitor_context_switch", 00:04:11.117 "spdk_kill_instance", 00:04:11.117 "log_enable_timestamps", 00:04:11.117 "log_get_flags", 00:04:11.117 "log_clear_flag", 00:04:11.117 "log_set_flag", 00:04:11.117 "log_get_level", 00:04:11.117 "log_set_level", 00:04:11.117 "log_get_print_level", 00:04:11.117 "log_set_print_level", 00:04:11.117 "framework_enable_cpumask_locks", 00:04:11.117 "framework_disable_cpumask_locks", 00:04:11.117 "framework_wait_init", 00:04:11.117 "framework_start_init", 00:04:11.117 "scsi_get_devices", 00:04:11.117 "bdev_get_histogram", 00:04:11.117 "bdev_enable_histogram", 00:04:11.117 "bdev_set_qos_limit", 00:04:11.117 "bdev_set_qd_sampling_period", 00:04:11.117 "bdev_get_bdevs", 00:04:11.117 "bdev_reset_iostat", 00:04:11.117 "bdev_get_iostat", 00:04:11.117 "bdev_examine", 00:04:11.117 "bdev_wait_for_examine", 00:04:11.117 "bdev_set_options", 00:04:11.117 "notify_get_notifications", 00:04:11.117 "notify_get_types", 00:04:11.117 "accel_get_stats", 00:04:11.117 "accel_set_options", 00:04:11.117 "accel_set_driver", 00:04:11.117 "accel_crypto_key_destroy", 00:04:11.117 "accel_crypto_keys_get", 00:04:11.117 "accel_crypto_key_create", 00:04:11.117 "accel_assign_opc", 00:04:11.117 "accel_get_module_info", 00:04:11.117 "accel_get_opc_assignments", 00:04:11.117 "vmd_rescan", 00:04:11.117 "vmd_remove_device", 00:04:11.117 "vmd_enable", 00:04:11.117 "sock_get_default_impl", 00:04:11.117 "sock_set_default_impl", 00:04:11.117 "sock_impl_set_options", 00:04:11.117 "sock_impl_get_options", 00:04:11.117 "iobuf_get_stats", 00:04:11.117 "iobuf_set_options", 00:04:11.117 "framework_get_pci_devices", 00:04:11.117 "framework_get_config", 00:04:11.117 "framework_get_subsystems", 00:04:11.117 "trace_get_info", 00:04:11.117 "trace_get_tpoint_group_mask", 00:04:11.117 "trace_disable_tpoint_group", 00:04:11.117 "trace_enable_tpoint_group", 00:04:11.117 "trace_clear_tpoint_mask", 00:04:11.117 "trace_set_tpoint_mask", 00:04:11.117 "keyring_get_keys", 00:04:11.117 
"spdk_get_version", 00:04:11.117 "rpc_get_methods" 00:04:11.117 ] 00:04:11.117 23:50:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:11.117 23:50:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:11.117 23:50:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 406865 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 406865 ']' 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 406865 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 406865 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 406865' 00:04:11.117 killing process with pid 406865 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 406865 00:04:11.117 23:50:40 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 406865 00:04:11.683 00:04:11.683 real 0m1.807s 00:04:11.683 user 0m3.463s 00:04:11.683 sys 0m0.494s 00:04:11.683 23:50:40 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:11.683 23:50:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:11.683 ************************************ 00:04:11.683 END TEST spdkcli_tcp 00:04:11.683 ************************************ 00:04:11.683 23:50:40 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:11.683 23:50:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:11.683 23:50:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:11.683 23:50:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.683 ************************************ 00:04:11.683 START TEST dpdk_mem_utility 00:04:11.683 ************************************ 00:04:11.683 23:50:40 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:11.683 * Looking for test storage... 
00:04:11.683 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:11.683 23:50:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:11.683 23:50:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=407200 00:04:11.683 23:50:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.683 23:50:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 407200 00:04:11.683 23:50:40 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 407200 ']' 00:04:11.683 23:50:40 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.683 23:50:40 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:11.683 23:50:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.683 23:50:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:11.683 23:50:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:11.683 [2024-05-14 23:50:40.917780] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:11.683 [2024-05-14 23:50:40.917872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407200 ] 00:04:11.683 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.683 [2024-05-14 23:50:40.983633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.942 [2024-05-14 23:50:41.090022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.201 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:12.201 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:12.201 23:50:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:12.201 23:50:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:12.201 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.201 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:12.201 { 00:04:12.201 "filename": "/tmp/spdk_mem_dump.txt" 00:04:12.201 } 00:04:12.201 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.201 23:50:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:12.201 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:12.201 1 heaps totaling size 814.000000 MiB 00:04:12.201 size: 814.000000 MiB heap id: 0 00:04:12.201 end heaps---------- 00:04:12.201 8 mempools totaling size 598.116089 MiB 00:04:12.201 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:12.201 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:12.201 size: 84.521057 MiB name: bdev_io_407200 00:04:12.201 size: 51.011292 MiB name: evtpool_407200 00:04:12.201 size: 50.003479 MiB name: msgpool_407200 
00:04:12.201 size: 21.763794 MiB name: PDU_Pool 00:04:12.201 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:12.201 size: 0.026123 MiB name: Session_Pool 00:04:12.201 end mempools------- 00:04:12.201 6 memzones totaling size 4.142822 MiB 00:04:12.201 size: 1.000366 MiB name: RG_ring_0_407200 00:04:12.201 size: 1.000366 MiB name: RG_ring_1_407200 00:04:12.201 size: 1.000366 MiB name: RG_ring_4_407200 00:04:12.201 size: 1.000366 MiB name: RG_ring_5_407200 00:04:12.201 size: 0.125366 MiB name: RG_ring_2_407200 00:04:12.201 size: 0.015991 MiB name: RG_ring_3_407200 00:04:12.201 end memzones------- 00:04:12.201 23:50:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:12.201 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:12.201 list of free elements. size: 12.519348 MiB 00:04:12.201 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:12.201 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:12.201 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:12.201 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:12.201 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:12.201 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:12.201 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:12.201 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:12.201 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:12.201 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:12.201 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:12.201 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:12.201 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:12.201 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:12.201 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:12.201 list of standard malloc elements. 
size: 199.218079 MiB 00:04:12.201 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:12.201 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:12.201 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:12.201 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:12.201 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:12.201 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:12.201 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:12.201 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:12.201 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:12.201 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:12.201 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:12.201 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:12.201 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:12.201 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:12.201 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:12.201 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:12.201 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:12.201 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:12.201 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:12.202 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:12.202 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:12.202 list of memzone associated elements. 
size: 602.262573 MiB 00:04:12.202 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:12.202 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:12.202 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:12.202 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:12.202 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:12.202 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_407200_0 00:04:12.202 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:12.202 associated memzone info: size: 48.002930 MiB name: MP_evtpool_407200_0 00:04:12.202 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:12.202 associated memzone info: size: 48.002930 MiB name: MP_msgpool_407200_0 00:04:12.202 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:12.202 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:12.202 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:12.202 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:12.202 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:12.202 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_407200 00:04:12.202 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:12.202 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_407200 00:04:12.202 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:12.202 associated memzone info: size: 1.007996 MiB name: MP_evtpool_407200 00:04:12.202 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:12.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:12.202 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:12.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:12.202 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:12.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:12.202 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:12.202 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:12.202 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:12.202 associated memzone info: size: 1.000366 MiB name: RG_ring_0_407200 00:04:12.202 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:12.202 associated memzone info: size: 1.000366 MiB name: RG_ring_1_407200 00:04:12.202 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:12.202 associated memzone info: size: 1.000366 MiB name: RG_ring_4_407200 00:04:12.202 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:12.202 associated memzone info: size: 1.000366 MiB name: RG_ring_5_407200 00:04:12.202 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:12.202 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_407200 00:04:12.202 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:12.202 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:12.202 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:12.202 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:12.202 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:12.202 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:12.202 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:12.202 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_407200 00:04:12.202 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:12.202 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:12.202 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:12.202 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:12.202 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:12.202 associated memzone info: size: 0.015991 MiB name: RG_ring_3_407200 00:04:12.202 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:12.202 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:12.202 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:12.202 associated memzone info: size: 0.000183 MiB name: MP_msgpool_407200 00:04:12.202 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:12.202 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_407200 00:04:12.202 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:12.202 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:12.202 23:50:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:12.202 23:50:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 407200 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 407200 ']' 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 407200 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 407200 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 407200' 00:04:12.202 killing process with pid 407200 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 407200 00:04:12.202 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 407200 00:04:12.769 00:04:12.769 real 0m1.117s 00:04:12.769 user 0m1.054s 00:04:12.769 sys 0m0.429s 00:04:12.769 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:12.769 23:50:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:12.769 ************************************ 00:04:12.769 END TEST dpdk_mem_utility 00:04:12.769 ************************************ 00:04:12.769 23:50:41 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:12.769 23:50:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:12.769 23:50:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.769 23:50:41 -- common/autotest_common.sh@10 -- # set +x 00:04:12.769 ************************************ 00:04:12.769 START TEST event 00:04:12.769 ************************************ 00:04:12.769 23:50:41 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:12.769 * Looking for test storage... 
00:04:12.769 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:12.769 23:50:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:12.769 23:50:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:12.769 23:50:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:12.769 23:50:42 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:12.769 23:50:42 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.769 23:50:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.769 ************************************ 00:04:12.769 START TEST event_perf 00:04:12.769 ************************************ 00:04:12.769 23:50:42 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:12.769 Running I/O for 1 seconds...[2024-05-14 23:50:42.087009] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:12.769 [2024-05-14 23:50:42.087074] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407388 ] 00:04:13.027 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.027 [2024-05-14 23:50:42.157884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:13.027 [2024-05-14 23:50:42.272461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.027 [2024-05-14 23:50:42.272518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:13.027 [2024-05-14 23:50:42.272636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:13.027 [2024-05-14 23:50:42.272639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.406 Running I/O for 1 seconds... 00:04:14.406 lcore 0: 228815 00:04:14.406 lcore 1: 228817 00:04:14.406 lcore 2: 228816 00:04:14.406 lcore 3: 228815 00:04:14.406 done. 00:04:14.406 00:04:14.406 real 0m1.328s 00:04:14.406 user 0m4.230s 00:04:14.406 sys 0m0.094s 00:04:14.406 23:50:43 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:14.406 23:50:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:14.406 ************************************ 00:04:14.406 END TEST event_perf 00:04:14.406 ************************************ 00:04:14.406 23:50:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:14.406 23:50:43 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:14.406 23:50:43 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.406 23:50:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.406 ************************************ 00:04:14.406 START TEST event_reactor 00:04:14.406 ************************************ 00:04:14.406 23:50:43 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:14.406 [2024-05-14 23:50:43.465814] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:04:14.406 [2024-05-14 23:50:43.465868] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407543 ] 00:04:14.406 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.406 [2024-05-14 23:50:43.538375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.406 [2024-05-14 23:50:43.655053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.781 test_start 00:04:15.781 oneshot 00:04:15.781 tick 100 00:04:15.781 tick 100 00:04:15.781 tick 250 00:04:15.781 tick 100 00:04:15.781 tick 100 00:04:15.781 tick 250 00:04:15.781 tick 100 00:04:15.781 tick 500 00:04:15.781 tick 100 00:04:15.781 tick 100 00:04:15.781 tick 250 00:04:15.781 tick 100 00:04:15.781 tick 100 00:04:15.781 test_end 00:04:15.781 00:04:15.781 real 0m1.323s 00:04:15.781 user 0m1.232s 00:04:15.781 sys 0m0.086s 00:04:15.781 23:50:44 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:15.781 23:50:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:15.781 ************************************ 00:04:15.781 END TEST event_reactor 00:04:15.781 ************************************ 00:04:15.781 23:50:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:15.781 23:50:44 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:15.781 23:50:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.781 23:50:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.781 ************************************ 00:04:15.781 START TEST event_reactor_perf 00:04:15.781 ************************************ 00:04:15.781 23:50:44 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:15.781 [2024-05-14 23:50:44.842355] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:04:15.781 [2024-05-14 23:50:44.842426] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407704 ] 00:04:15.781 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.781 [2024-05-14 23:50:44.917417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.781 [2024-05-14 23:50:45.033244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.156 test_start 00:04:17.156 test_end 00:04:17.156 Performance: 350967 events per second 00:04:17.156 00:04:17.156 real 0m1.330s 00:04:17.156 user 0m1.226s 00:04:17.156 sys 0m0.099s 00:04:17.156 23:50:46 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.156 23:50:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:17.156 ************************************ 00:04:17.156 END TEST event_reactor_perf 00:04:17.156 ************************************ 00:04:17.156 23:50:46 event -- event/event.sh@49 -- # uname -s 00:04:17.156 23:50:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:17.156 23:50:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:17.156 23:50:46 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.156 23:50:46 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.156 23:50:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.156 ************************************ 00:04:17.156 START TEST event_scheduler 00:04:17.156 ************************************ 00:04:17.156 23:50:46 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:17.156 * Looking for test storage... 00:04:17.156 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:04:17.156 23:50:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:17.156 23:50:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=407973 00:04:17.156 23:50:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:17.156 23:50:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.156 23:50:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 407973 00:04:17.156 23:50:46 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 407973 ']' 00:04:17.156 23:50:46 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.156 23:50:46 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:17.156 23:50:46 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:17.156 23:50:46 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:17.156 23:50:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.156 [2024-05-14 23:50:46.305951] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:17.156 [2024-05-14 23:50:46.306032] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407973 ] 00:04:17.156 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.156 [2024-05-14 23:50:46.373383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:17.156 [2024-05-14 23:50:46.479804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.156 [2024-05-14 23:50:46.479874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.156 [2024-05-14 23:50:46.479947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:17.156 [2024-05-14 23:50:46.479952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:17.415 23:50:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 POWER: Env isn't set yet! 00:04:17.415 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:17.415 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:17.415 POWER: Cannot get available frequencies of lcore 0 00:04:17.415 POWER: Attempting to initialise PSTAT power management... 00:04:17.415 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:17.415 POWER: Initialized successfully for lcore 0 power management 00:04:17.415 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:17.415 POWER: Initialized successfully for lcore 1 power management 00:04:17.415 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:17.415 POWER: Initialized successfully for lcore 2 power management 00:04:17.415 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:17.415 POWER: Initialized successfully for lcore 3 power management 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.415 23:50:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 [2024-05-14 23:50:46.653348] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
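Because the scheduler app was started with --wait-for-rpc, subsystem initialization is deferred until the two rpc_cmd calls traced above; driven by hand against the default socket they are simply (rpc_cmd is the test suite's wrapper around rpc.py):

    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init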
00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.415 23:50:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 ************************************ 00:04:17.415 START TEST scheduler_create_thread 00:04:17.415 ************************************ 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 2 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 3 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 4 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 5 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 6 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 7 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.415 8 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.415 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.672 9 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.673 10 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.673 23:50:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.238 23:50:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.238 23:50:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:18.238 23:50:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:18.238 23:50:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.238 23:50:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.169 23:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.169 00:04:19.169 real 0m1.753s 00:04:19.169 user 0m0.014s 00:04:19.169 sys 0m0.000s 00:04:19.169 23:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:19.169 23:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.169 ************************************ 00:04:19.169 END TEST scheduler_create_thread 00:04:19.169 ************************************ 00:04:19.169 23:50:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:19.169 23:50:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 407973 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 407973 ']' 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 407973 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 407973 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 407973' 00:04:19.169 killing process with pid 407973 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 407973 00:04:19.169 23:50:48 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 407973 00:04:19.734 [2024-05-14 23:50:48.921374] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
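The scheduler_create_thread run above is driven entirely over JSON-RPC: scheduler/scheduler.sh loads a test plugin and issues scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete calls against the running scheduler app. A minimal sketch of that flow, assuming an SPDK checkout in $SPDK_DIR, a scheduler test app already serving the default RPC socket, and the plugin directory on PYTHONPATH (all of which the harness sets up itself):

    #!/usr/bin/env bash
    # Sketch of the RPC sequence traced above; not the test script itself.
    rpc() { "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }

    rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100  # busy thread pinned to core 0
    rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0    # idle thread pinned to core 0

    tid=$(rpc scheduler_thread_create -n half_active -a 0)      # create returns the thread id
    rpc scheduler_thread_set_active "$tid" 50                   # raise it to ~50% active

    tid=$(rpc scheduler_thread_create -n deleted -a 100)
    rpc scheduler_thread_delete "$tid"                          # threads can be torn down too

The -m mask pins a thread to a core and -a sets its active percentage, which is what gives the dynamic scheduler something to balance across the four lcores seen in the trace.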
00:04:19.734 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:19.734 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:19.734 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:19.734 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:19.734 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:19.734 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:19.734 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:19.734 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:19.993 00:04:19.993 real 0m2.964s 00:04:19.993 user 0m3.811s 00:04:19.993 sys 0m0.336s 00:04:19.993 23:50:49 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:19.993 23:50:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:19.993 ************************************ 00:04:19.993 END TEST event_scheduler 00:04:19.993 ************************************ 00:04:19.993 23:50:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:19.993 23:50:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:19.993 23:50:49 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:19.993 23:50:49 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:19.993 23:50:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:19.993 ************************************ 00:04:19.993 START TEST app_repeat 00:04:19.993 ************************************ 00:04:19.993 23:50:49 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=408329 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 408329' 00:04:19.993 Process app_repeat pid: 408329 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:19.993 spdk_app_start Round 0 00:04:19.993 23:50:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 408329 /var/tmp/spdk-nbd.sock 00:04:19.993 23:50:49 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 408329 ']' 00:04:19.993 23:50:49 event.app_repeat -- common/autotest_common.sh@831 
-- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:19.993 23:50:49 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:19.993 23:50:49 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:19.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:19.993 23:50:49 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:19.993 23:50:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:19.993 [2024-05-14 23:50:49.264576] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:19.993 [2024-05-14 23:50:49.264640] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408329 ] 00:04:19.993 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.993 [2024-05-14 23:50:49.333064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.251 [2024-05-14 23:50:49.446799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.251 [2024-05-14 23:50:49.446803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.251 23:50:49 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:20.251 23:50:49 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:20.251 23:50:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.509 Malloc0 00:04:20.509 23:50:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.767 Malloc1 00:04:20.767 23:50:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.767 23:50:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:21.024 /dev/nbd0 00:04:21.024 23:50:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:21.024 23:50:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:21.024 1+0 records in 00:04:21.024 1+0 records out 00:04:21.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197512 s, 20.7 MB/s 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:21.024 23:50:50 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:21.281 23:50:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.281 23:50:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.281 23:50:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:21.281 /dev/nbd1 00:04:21.281 23:50:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:21.281 23:50:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:21.281 1+0 records in 00:04:21.281 1+0 records out 00:04:21.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196684 s, 20.8 MB/s 
00:04:21.281 23:50:50 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:21.538 23:50:50 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:21.538 23:50:50 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:21.538 23:50:50 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:21.538 23:50:50 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:21.538 23:50:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.538 23:50:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.538 23:50:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.538 23:50:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.538 23:50:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:21.796 { 00:04:21.796 "nbd_device": "/dev/nbd0", 00:04:21.796 "bdev_name": "Malloc0" 00:04:21.796 }, 00:04:21.796 { 00:04:21.796 "nbd_device": "/dev/nbd1", 00:04:21.796 "bdev_name": "Malloc1" 00:04:21.796 } 00:04:21.796 ]' 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:21.796 { 00:04:21.796 "nbd_device": "/dev/nbd0", 00:04:21.796 "bdev_name": "Malloc0" 00:04:21.796 }, 00:04:21.796 { 00:04:21.796 "nbd_device": "/dev/nbd1", 00:04:21.796 "bdev_name": "Malloc1" 00:04:21.796 } 00:04:21.796 ]' 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:21.796 /dev/nbd1' 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:21.796 /dev/nbd1' 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:21.796 256+0 records in 00:04:21.796 256+0 records out 00:04:21.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509432 s, 206 MB/s 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.796 23:50:50 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:21.796 256+0 records in 00:04:21.796 256+0 records out 00:04:21.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207871 s, 50.4 MB/s 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:21.796 256+0 records in 00:04:21.796 256+0 records out 00:04:21.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024897 s, 42.1 MB/s 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.796 23:50:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:21.796 23:50:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.796 23:50:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:21.797 23:50:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.797 23:50:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.797 23:50:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:21.797 23:50:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:21.797 23:50:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.797 23:50:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 
0 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:22.056 23:50:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.313 23:50:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:22.571 23:50:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:22.571 23:50:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:22.828 23:50:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:23.086 [2024-05-14 23:50:52.353763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.343 [2024-05-14 23:50:52.471406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.343 [2024-05-14 23:50:52.471407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.343 [2024-05-14 23:50:52.533254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:23.343 [2024-05-14 23:50:52.533329] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
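Round 0 above spends most of its trace in waitfornbd: after each nbd_start_disk RPC, the harness polls /proc/partitions for the new device, then pulls a single 4 KiB block through it with O_DIRECT and treats a non-empty read as proof the device is live. A stripped-down sketch of that readiness check, with the retry bound taken from the (( i <= 20 )) loops in the trace and the temp path shortened (the sleep interval is a guess; the trace never needs a retry):

    # Sketch of the waitfornbd readiness check traced in Round 0.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                      # wait for the kernel to publish it
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do                      # then prove it services reads
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }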
00:04:25.867 23:50:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:25.867 23:50:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:25.867 spdk_app_start Round 1 00:04:25.867 23:50:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 408329 /var/tmp/spdk-nbd.sock 00:04:25.867 23:50:55 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 408329 ']' 00:04:25.867 23:50:55 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:25.867 23:50:55 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:25.867 23:50:55 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:25.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:25.867 23:50:55 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:25.867 23:50:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:26.125 23:50:55 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:26.125 23:50:55 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:26.125 23:50:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.383 Malloc0 00:04:26.383 23:50:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.641 Malloc1 00:04:26.641 23:50:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.641 23:50:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:26.899 /dev/nbd0 00:04:26.899 23:50:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:26.899 23:50:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.899 1+0 records in 00:04:26.899 1+0 records out 00:04:26.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192941 s, 21.2 MB/s 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:26.899 23:50:56 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:26.899 23:50:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.899 23:50:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.899 23:50:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:27.157 /dev/nbd1 00:04:27.157 23:50:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:27.157 23:50:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.157 1+0 records in 00:04:27.157 1+0 records out 00:04:27.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240713 s, 17.0 MB/s 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:27.157 23:50:56 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:27.158 23:50:56 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:27.158 23:50:56 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:27.158 23:50:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.158 23:50:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.158 23:50:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.158 23:50:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.158 23:50:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:27.416 { 00:04:27.416 "nbd_device": "/dev/nbd0", 00:04:27.416 "bdev_name": "Malloc0" 00:04:27.416 }, 00:04:27.416 { 00:04:27.416 "nbd_device": "/dev/nbd1", 00:04:27.416 "bdev_name": "Malloc1" 00:04:27.416 } 00:04:27.416 ]' 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:27.416 { 00:04:27.416 "nbd_device": "/dev/nbd0", 00:04:27.416 "bdev_name": "Malloc0" 00:04:27.416 }, 00:04:27.416 { 00:04:27.416 "nbd_device": "/dev/nbd1", 00:04:27.416 "bdev_name": "Malloc1" 00:04:27.416 } 00:04:27.416 ]' 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:27.416 /dev/nbd1' 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:27.416 /dev/nbd1' 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:27.416 256+0 records in 00:04:27.416 256+0 records out 00:04:27.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501903 s, 209 MB/s 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:27.416 256+0 records in 00:04:27.416 256+0 records out 00:04:27.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243738 s, 43.0 MB/s 00:04:27.416 23:50:56 event.app_repeat 
-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:27.416 256+0 records in 00:04:27.416 256+0 records out 00:04:27.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022923 s, 45.7 MB/s 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:27.416 23:50:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.417 23:50:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:27.417 23:50:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.417 23:50:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:27.417 23:50:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.417 23:50:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.417 23:50:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:27.417 23:50:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:27.417 23:50:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.417 23:50:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.675 23:50:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.933 23:50:57 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.933 23:50:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.192 23:50:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:28.192 23:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:28.192 23:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.450 23:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:28.450 23:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:28.450 23:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.450 23:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:28.450 23:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:28.450 23:50:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:28.450 23:50:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:28.450 23:50:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:28.450 23:50:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:28.450 23:50:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:28.707 23:50:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:28.966 [2024-05-14 23:50:58.097834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.966 [2024-05-14 23:50:58.214785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.966 [2024-05-14 23:50:58.214791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.966 [2024-05-14 23:50:58.277860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.966 [2024-05-14 23:50:58.277943] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
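The heart of each round is nbd_dd_data_verify, visible twice above: 1 MiB of /dev/urandom is staged in a temp file, written through both nbd devices with oflag=direct, then byte-compared back with cmp, so the data must survive a full round trip through the Malloc bdevs. Roughly (a sketch of what the trace shows, with paths shortened):

    # Sketch of the write/verify pass from the trace.
    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # stage 1 MiB of random data

    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write it through each device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                             # read back and byte-compare
    done
    rm "$tmp"

The closing nbd_get_count check then asks the server for nbd_get_disks, extracts device paths with jq -r '.[] | .nbd_device', and counts them with grep -c; after both nbd_stop_disk calls the expected count is 0, which is exactly what the '[' 0 -ne 0 ']' branches above confirm.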
00:04:31.524 23:51:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:31.524 23:51:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:31.524 spdk_app_start Round 2 00:04:31.524 23:51:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 408329 /var/tmp/spdk-nbd.sock 00:04:31.524 23:51:00 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 408329 ']' 00:04:31.524 23:51:00 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.524 23:51:00 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:31.524 23:51:00 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:31.524 23:51:00 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:31.524 23:51:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.782 23:51:01 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:31.782 23:51:01 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:31.782 23:51:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.040 Malloc0 00:04:32.040 23:51:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.298 Malloc1 00:04:32.298 23:51:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.298 23:51:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:32.557 /dev/nbd0 00:04:32.557 23:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:32.557 23:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.557 1+0 records in 00:04:32.557 1+0 records out 00:04:32.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001715 s, 23.9 MB/s 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:32.557 23:51:01 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:32.557 23:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.557 23:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.557 23:51:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:32.815 /dev/nbd1 00:04:32.815 23:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:32.815 23:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.815 1+0 records in 00:04:32.815 1+0 records out 00:04:32.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164853 s, 24.8 MB/s 00:04:32.815 23:51:02 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:33.074 23:51:02 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:33.074 23:51:02 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:33.074 23:51:02 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:33.074 23:51:02 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:33.074 23:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.074 23:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.074 23:51:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.074 23:51:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.074 23:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.074 23:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:33.074 { 00:04:33.074 "nbd_device": "/dev/nbd0", 00:04:33.074 "bdev_name": "Malloc0" 00:04:33.074 }, 00:04:33.074 { 00:04:33.074 "nbd_device": "/dev/nbd1", 00:04:33.074 "bdev_name": "Malloc1" 00:04:33.074 } 00:04:33.074 ]' 00:04:33.074 23:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:33.074 { 00:04:33.074 "nbd_device": "/dev/nbd0", 00:04:33.074 "bdev_name": "Malloc0" 00:04:33.074 }, 00:04:33.074 { 00:04:33.074 "nbd_device": "/dev/nbd1", 00:04:33.074 "bdev_name": "Malloc1" 00:04:33.074 } 00:04:33.074 ]' 00:04:33.074 23:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:33.332 /dev/nbd1' 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:33.332 /dev/nbd1' 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:33.332 256+0 records in 00:04:33.332 256+0 records out 00:04:33.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422385 s, 248 MB/s 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:33.332 256+0 records in 00:04:33.332 256+0 records out 00:04:33.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217636 s, 48.2 MB/s 00:04:33.332 23:51:02 event.app_repeat 
-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:33.332 256+0 records in 00:04:33.332 256+0 records out 00:04:33.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251486 s, 41.7 MB/s 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.332 23:51:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.590 23:51:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:33.848 23:51:03 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.848 23:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:34.106 23:51:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:34.106 23:51:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:34.364 23:51:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:34.621 [2024-05-14 23:51:03.870650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:34.879 [2024-05-14 23:51:03.989332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.879 [2024-05-14 23:51:03.989333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.879 [2024-05-14 23:51:04.052433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:34.879 [2024-05-14 23:51:04.052522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:37.406 23:51:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 408329 /var/tmp/spdk-nbd.sock 00:04:37.406 23:51:06 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 408329 ']' 00:04:37.407 23:51:06 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.407 23:51:06 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:37.407 23:51:06 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:37.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:37.407 23:51:06 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:37.407 23:51:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:37.665 23:51:06 event.app_repeat -- event/event.sh@39 -- # killprocess 408329 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 408329 ']' 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 408329 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 408329 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 408329' 00:04:37.665 killing process with pid 408329 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@965 -- # kill 408329 00:04:37.665 23:51:06 event.app_repeat -- common/autotest_common.sh@970 -- # wait 408329 00:04:37.923 spdk_app_start is called in Round 0. 00:04:37.924 Shutdown signal received, stop current app iteration 00:04:37.924 Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 reinitialization... 00:04:37.924 spdk_app_start is called in Round 1. 00:04:37.924 Shutdown signal received, stop current app iteration 00:04:37.924 Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 reinitialization... 00:04:37.924 spdk_app_start is called in Round 2. 00:04:37.924 Shutdown signal received, stop current app iteration 00:04:37.924 Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 reinitialization... 00:04:37.924 spdk_app_start is called in Round 3. 
00:04:37.924 Shutdown signal received, stop current app iteration 00:04:37.924 23:51:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:37.924 23:51:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:37.924 00:04:37.924 real 0m17.865s 00:04:37.924 user 0m38.982s 00:04:37.924 sys 0m3.346s 00:04:37.924 23:51:07 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:37.924 23:51:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:37.924 ************************************ 00:04:37.924 END TEST app_repeat 00:04:37.924 ************************************ 00:04:37.924 23:51:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:37.924 23:51:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:37.924 23:51:07 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.924 23:51:07 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.924 23:51:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.924 ************************************ 00:04:37.924 START TEST cpu_locks 00:04:37.924 ************************************ 00:04:37.924 23:51:07 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:37.924 * Looking for test storage... 00:04:37.924 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:37.924 23:51:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:37.924 23:51:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:37.924 23:51:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:37.924 23:51:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:37.924 23:51:07 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.924 23:51:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.924 23:51:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.924 ************************************ 00:04:37.924 START TEST default_locks 00:04:37.924 ************************************ 00:04:37.924 23:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:04:37.924 23:51:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=410823 00:04:37.924 23:51:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.924 23:51:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 410823 00:04:37.924 23:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 410823 ']' 00:04:37.924 23:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.924 23:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:37.924 23:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
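Every test in this suite tears its target down through the killprocess helper seen in the app_repeat epilogue above: it confirms via ps that the pid still belongs to an SPDK reactor (comm shows reactor_0), signals it, and reaps it with wait. A condensed sketch, assuming the target is a child of the calling shell; the real helper in test/common/autotest_common.sh also special-cases targets launched under sudo:

    killprocess() {
        local pid=$1 process_name
        # only signal the pid if it is still the process we launched; an
        # SPDK target's primary reactor reports reactor_0 as its comm name
        process_name=$(ps --no-headers -o comm= "$pid")
        # (the sudo-wrapped case is handled separately in the real helper)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it so the next test starts from a clean slate
    }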
00:04:37.924 23:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:37.924 23:51:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.182 [2024-05-14 23:51:07.284116] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:38.182 [2024-05-14 23:51:07.284198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410823 ] 00:04:38.182 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.182 [2024-05-14 23:51:07.357693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.182 [2024-05-14 23:51:07.474356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.117 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:39.117 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:04:39.117 23:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 410823 00:04:39.117 23:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 410823 00:04:39.117 23:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.375 lslocks: write error 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 410823 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 410823 ']' 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 410823 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 410823 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 410823' 00:04:39.375 killing process with pid 410823 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 410823 00:04:39.375 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 410823 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 410823 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 410823 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # 
waitforlisten 410823 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 410823 ']' 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (410823) - No such process 00:04:39.634 ERROR: process (pid: 410823) is no longer running 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:39.634 00:04:39.634 real 0m1.715s 00:04:39.634 user 0m1.862s 00:04:39.634 sys 0m0.535s 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.634 23:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.634 ************************************ 00:04:39.634 END TEST default_locks 00:04:39.634 ************************************ 00:04:39.634 23:51:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:39.634 23:51:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.634 23:51:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.634 23:51:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.892 ************************************ 00:04:39.892 START TEST default_locks_via_rpc 00:04:39.892 ************************************ 00:04:39.892 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:04:39.892 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=411585 00:04:39.892 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.892 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 411585 00:04:39.892 23:51:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 411585 ']' 00:04:39.892 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.892 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.892 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.892 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.892 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.892 [2024-05-14 23:51:09.049303] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:39.892 [2024-05-14 23:51:09.049389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411585 ] 00:04:39.892 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.892 [2024-05-14 23:51:09.119532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.892 [2024-05-14 23:51:09.232362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.151 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:40.151 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:40.151 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:40.151 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.151 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.408 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.408 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 411585 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 411585 00:04:40.409 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 411585 00:04:40.667 23:51:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 411585 ']' 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 411585 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 411585 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 411585' 00:04:40.667 killing process with pid 411585 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 411585 00:04:40.667 23:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 411585 00:04:41.234 00:04:41.234 real 0m1.344s 00:04:41.234 user 0m1.283s 00:04:41.234 sys 0m0.535s 00:04:41.234 23:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.234 23:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.234 ************************************ 00:04:41.234 END TEST default_locks_via_rpc 00:04:41.234 ************************************ 00:04:41.234 23:51:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:41.234 23:51:10 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.234 23:51:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.234 23:51:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.234 ************************************ 00:04:41.234 START TEST non_locking_app_on_locked_coremask 00:04:41.234 ************************************ 00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=411756 00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 411756 /var/tmp/spdk.sock 00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 411756 ']' 00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
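The default_locks teardown a few lines earlier leaned on the NOT helper: waitforlisten was run against a pid that had just been killed, and the test only passes because NOT inverts the exit status of the command it wraps. A compact sketch of that inversion, ignoring the signal-exit (es > 128) filtering the real helper also performs:

    NOT() {
        local es=0
        "$@" || es=$?
        # NOT succeeds exactly when the wrapped command failed
        (( es != 0 ))
    }

    # expected to fail: this target was already killed, so its RPC socket
    # never comes up and waitforlisten (an SPDK test helper) errors out
    NOT waitforlisten 410823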
00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:41.234 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.234 [2024-05-14 23:51:10.446042] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:41.234 [2024-05-14 23:51:10.446135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411756 ] 00:04:41.234 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.234 [2024-05-14 23:51:10.513018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.493 [2024-05-14 23:51:10.623328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=411766 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 411766 /var/tmp/spdk2.sock 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 411766 ']' 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:41.752 23:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.752 [2024-05-14 23:51:10.928013] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:41.752 [2024-05-14 23:51:10.928101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411766 ] 00:04:41.752 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.752 [2024-05-14 23:51:11.044115] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
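non_locking_app_on_locked_coremask, starting above, runs two targets on the same core: the first spdk_tgt -m 0x1 claims /var/tmp/spdk_cpu_lock_000 on startup, while the second passes --disable-cpumask-locks and skips claiming entirely, which is why it boots cleanly and prints the 'CPU core locks deactivated' notice. A sketch of that launch pair, with paths shortened for readability:

    # first instance claims the core-0 lock file during init
    ./build/bin/spdk_tgt -m 0x1 &
    pid1=$!

    # second instance shares core 0 but opts out of lock claiming, so it
    # starts successfully instead of dying with a claim error
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!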
00:04:41.752 [2024-05-14 23:51:11.044148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.010 [2024-05-14 23:51:11.282506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.576 23:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:42.576 23:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:42.576 23:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 411756 00:04:42.576 23:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 411756 00:04:42.576 23:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.142 lslocks: write error 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 411756 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 411756 ']' 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 411756 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 411756 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 411756' 00:04:43.142 killing process with pid 411756 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 411756 00:04:43.142 23:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 411756 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 411766 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 411766 ']' 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 411766 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 411766 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 411766' 00:04:44.077 killing 
process with pid 411766 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 411766 00:04:44.077 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 411766 00:04:44.666 00:04:44.666 real 0m3.342s 00:04:44.666 user 0m3.457s 00:04:44.666 sys 0m1.100s 00:04:44.666 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.666 23:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.666 ************************************ 00:04:44.666 END TEST non_locking_app_on_locked_coremask 00:04:44.666 ************************************ 00:04:44.666 23:51:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:44.666 23:51:13 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.666 23:51:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.666 23:51:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.666 ************************************ 00:04:44.666 START TEST locking_app_on_unlocked_coremask 00:04:44.666 ************************************ 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=412192 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 412192 /var/tmp/spdk.sock 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 412192 ']' 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:44.666 23:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.666 [2024-05-14 23:51:13.840402] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:44.666 [2024-05-14 23:51:13.840495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412192 ] 00:04:44.666 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.666 [2024-05-14 23:51:13.913583] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
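The positive assertions in these tests all go through locks_exist, visible in the traces above: lslocks lists the file locks held by the target's pid and grep looks for the spdk_cpu_lock prefix. The recurring 'lslocks: write error' lines are benign; grep -q exits as soon as it matches, and lslocks complains about the broken pipe. A sketch of the check, assuming $pid names a running target:

    locks_exist() {
        local pid=$1
        # the target's core locks are locks on /var/tmp/spdk_cpu_lock_NNN;
        # the pipeline's status is grep's, so this returns 0 iff a lock exists
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }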
00:04:44.666 [2024-05-14 23:51:13.913629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.925 [2024-05-14 23:51:14.028890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=412328 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 412328 /var/tmp/spdk2.sock 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 412328 ']' 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:45.491 23:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.491 [2024-05-14 23:51:14.818880] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:04:45.491 [2024-05-14 23:51:14.818969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412328 ] 00:04:45.749 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.749 [2024-05-14 23:51:14.930994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.008 [2024-05-14 23:51:15.169889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.607 23:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:46.607 23:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:46.607 23:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 412328 00:04:46.607 23:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 412328 00:04:46.607 23:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.173 lslocks: write error 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 412192 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 412192 ']' 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 412192 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 412192 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 412192' 00:04:47.173 killing process with pid 412192 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 412192 00:04:47.173 23:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 412192 00:04:48.107 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 412328 00:04:48.107 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 412328 ']' 00:04:48.107 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 412328 00:04:48.107 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:48.107 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:48.107 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 412328 00:04:48.107 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:48.108 
23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:48.108 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 412328' 00:04:48.108 killing process with pid 412328 00:04:48.108 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 412328 00:04:48.108 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 412328 00:04:48.365 00:04:48.365 real 0m3.849s 00:04:48.365 user 0m4.154s 00:04:48.365 sys 0m1.108s 00:04:48.365 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:48.365 23:51:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.365 ************************************ 00:04:48.365 END TEST locking_app_on_unlocked_coremask 00:04:48.365 ************************************ 00:04:48.365 23:51:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:48.365 23:51:17 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.365 23:51:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.365 23:51:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.365 ************************************ 00:04:48.365 START TEST locking_app_on_locked_coremask 00:04:48.366 ************************************ 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=412668 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 412668 /var/tmp/spdk.sock 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 412668 ']' 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:48.366 23:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.624 [2024-05-14 23:51:17.754157] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:04:48.624 [2024-05-14 23:51:17.754255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412668 ] 00:04:48.624 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.624 [2024-05-14 23:51:17.840322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.882 [2024-05-14 23:51:17.976065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=412775 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 412775 /var/tmp/spdk2.sock 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 412775 /var/tmp/spdk2.sock 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 412775 /var/tmp/spdk2.sock 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 412775 ']' 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:49.450 23:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.450 [2024-05-14 23:51:18.735894] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
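locking_app_on_locked_coremask, running here, covers the fully overlapping case: the second spdk_tgt reuses -m 0x1 with locks enabled, so its startup must abort when it tries to claim core 0, producing the claim error and the dead-pid ERROR shown just below. A sketch of the scenario (pids are illustrative):

    ./build/bin/spdk_tgt -m 0x1 &                       # claims core 0
    pid1=$!
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!
    # identical mask, locks enabled: the second target exits during init
    # ("Cannot create lock on core 0, probably process $pid1 has claimed it"),
    # so the test asserts that waiting for its socket fails
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock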
00:04:49.450 [2024-05-14 23:51:18.736009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412775 ] 00:04:49.450 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.708 [2024-05-14 23:51:18.848760] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 412668 has claimed it. 00:04:49.708 [2024-05-14 23:51:18.848810] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:50.271 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (412775) - No such process 00:04:50.271 ERROR: process (pid: 412775) is no longer running 00:04:50.271 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:50.271 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:50.271 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:50.271 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.271 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:50.271 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.271 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 412668 00:04:50.271 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 412668 00:04:50.271 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.837 lslocks: write error 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 412668 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 412668 ']' 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 412668 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 412668 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 412668' 00:04:50.837 killing process with pid 412668 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 412668 00:04:50.837 23:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 412668 00:04:51.095 00:04:51.095 real 0m2.664s 00:04:51.095 user 0m2.980s 00:04:51.095 sys 0m0.735s 00:04:51.095 23:51:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 
-- # xtrace_disable 00:04:51.095 23:51:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.095 ************************************ 00:04:51.095 END TEST locking_app_on_locked_coremask 00:04:51.095 ************************************ 00:04:51.095 23:51:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:51.095 23:51:20 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.095 23:51:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.095 23:51:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.095 ************************************ 00:04:51.095 START TEST locking_overlapped_coremask 00:04:51.095 ************************************ 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=413067 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 413067 /var/tmp/spdk.sock 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 413067 ']' 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:51.095 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.354 [2024-05-14 23:51:20.472292] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:04:51.354 [2024-05-14 23:51:20.472358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413067 ] 00:04:51.354 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.354 [2024-05-14 23:51:20.546732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:51.354 [2024-05-14 23:51:20.673956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.354 [2024-05-14 23:51:20.674027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.354 [2024-05-14 23:51:20.674032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=413074 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 413074 /var/tmp/spdk2.sock 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 413074 /var/tmp/spdk2.sock 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 413074 /var/tmp/spdk2.sock 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 413074 ']' 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:51.612 23:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.870 [2024-05-14 23:51:20.984031] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
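locking_overlapped_coremask narrows the conflict to a partial overlap: the first target holds cores 0-2 (-m 0x7) and the second asks for cores 2-4 (-m 0x1c), so only core 2 is contested; the claim error naming that core follows just below. A sketch of the setup:

    ./build/bin/spdk_tgt -m 0x7 &                            # locks cores 0,1,2
    pid1=$!
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock &    # wants cores 2,3,4
    pid2=$!
    # core 2 is already claimed, so the second target must die with
    # "Cannot create lock on core 2, probably process $pid1 has claimed it"
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock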
00:04:51.870 [2024-05-14 23:51:20.984117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413074 ] 00:04:51.870 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.870 [2024-05-14 23:51:21.096252] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 413067 has claimed it. 00:04:51.870 [2024-05-14 23:51:21.096325] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:52.436 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (413074) - No such process 00:04:52.436 ERROR: process (pid: 413074) is no longer running 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 413067 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 413067 ']' 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 413067 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 413067 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 413067' 00:04:52.436 killing process with pid 413067 00:04:52.436 23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 413067 00:04:52.436 
23:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 413067 00:04:53.001 00:04:53.001 real 0m1.763s 00:04:53.001 user 0m4.625s 00:04:53.001 sys 0m0.505s 00:04:53.001 23:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.001 23:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.001 ************************************ 00:04:53.001 END TEST locking_overlapped_coremask 00:04:53.001 ************************************ 00:04:53.001 23:51:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:53.001 23:51:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.001 23:51:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.001 23:51:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.001 ************************************ 00:04:53.001 START TEST locking_overlapped_coremask_via_rpc 00:04:53.001 ************************************ 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=413360 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 413360 /var/tmp/spdk.sock 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 413360 ']' 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:53.002 23:51:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.002 [2024-05-14 23:51:22.292843] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:53.002 [2024-05-14 23:51:22.292943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413360 ] 00:04:53.002 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.260 [2024-05-14 23:51:22.366581] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
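Before it tore down, the overlapped test also verified lock-file bookkeeping with check_remaining_locks, comparing the actual /var/tmp/spdk_cpu_lock_* glob against the expected set for the surviving 0x7 mask, exactly as in the trace above:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0,1,2
        # any missing or stray lock file makes the expansions differ
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }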
00:04:53.260 [2024-05-14 23:51:22.366631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:53.260 [2024-05-14 23:51:22.510633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.260 [2024-05-14 23:51:22.510691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.260 [2024-05-14 23:51:22.510699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=413382 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 413382 /var/tmp/spdk2.sock 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 413382 ']' 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:54.194 23:51:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.194 [2024-05-14 23:51:23.267384] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:54.194 [2024-05-14 23:51:23.267467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413382 ] 00:04:54.194 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.194 [2024-05-14 23:51:23.376723] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
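Note: both spdk_tgt instances in this test start with --disable-cpumask-locks, hence the "CPU core locks deactivated" notices; the collision is provoked later over RPC. The two masks are chosen to overlap on exactly one core, which a line of shell arithmetic makes explicit (illustrative only, not part of the harness):

    # 0x7 = 0b00111 -> cores 0,1,2    0x1c = 0b11100 -> cores 2,3,4
    printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. bit 2 -> core 2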
00:04:54.194 [2024-05-14 23:51:23.376759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.453 [2024-05-14 23:51:23.605211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.453 [2024-05-14 23:51:23.605259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:54.453 [2024-05-14 23:51:23.605261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.017 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.018 [2024-05-14 23:51:24.228026] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 413360 has claimed it. 
00:04:55.018 request: 00:04:55.018 { 00:04:55.018 "method": "framework_enable_cpumask_locks", 00:04:55.018 "req_id": 1 00:04:55.018 } 00:04:55.018 Got JSON-RPC error response 00:04:55.018 response: 00:04:55.018 { 00:04:55.018 "code": -32603, 00:04:55.018 "message": "Failed to claim CPU core: 2" 00:04:55.018 } 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 413360 /var/tmp/spdk.sock 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 413360 ']' 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:55.018 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.274 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:55.274 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:55.274 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 413382 /var/tmp/spdk2.sock 00:04:55.274 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 413382 ']' 00:04:55.274 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.274 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:55.274 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
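Note: the [[ 0 == 0 ]] / [[ 1 == 0 ]] pair above is rpc_cmd's status check: framework_enable_cpumask_locks succeeds on the first target and returns the -32603 "Failed to claim CPU core: 2" error on the second, precisely because their masks share core 2. Outside the harness the same exchange could be driven with rpc.py from an SPDK checkout (sketch; socket paths as used in this test):

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target claims cores 0,1,2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed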
00:04:55.274 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:55.274 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.531 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:55.531 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:55.531 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:55.531 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:55.531 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:55.531 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:55.531 00:04:55.531 real 0m2.508s 00:04:55.531 user 0m1.229s 00:04:55.531 sys 0m0.212s 00:04:55.531 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:55.531 23:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.531 ************************************ 00:04:55.531 END TEST locking_overlapped_coremask_via_rpc 00:04:55.531 ************************************ 00:04:55.531 23:51:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:55.531 23:51:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 413360 ]] 00:04:55.531 23:51:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 413360 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 413360 ']' 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 413360 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 413360 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 413360' 00:04:55.531 killing process with pid 413360 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 413360 00:04:55.531 23:51:24 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 413360 00:04:56.098 23:51:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 413382 ]] 00:04:56.098 23:51:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 413382 00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 413382 ']' 00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 413382 00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 413382 00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 413382' 00:04:56.098 killing process with pid 413382 00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 413382 00:04:56.098 23:51:25 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 413382 00:04:56.664 23:51:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:56.664 23:51:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:56.664 23:51:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 413360 ]] 00:04:56.664 23:51:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 413360 00:04:56.664 23:51:25 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 413360 ']' 00:04:56.664 23:51:25 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 413360 00:04:56.664 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (413360) - No such process 00:04:56.664 23:51:25 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 413360 is not found' 00:04:56.664 Process with pid 413360 is not found 00:04:56.664 23:51:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 413382 ]] 00:04:56.664 23:51:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 413382 00:04:56.664 23:51:25 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 413382 ']' 00:04:56.664 23:51:25 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 413382 00:04:56.664 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (413382) - No such process 00:04:56.664 23:51:25 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 413382 is not found' 00:04:56.664 Process with pid 413382 is not found 00:04:56.664 23:51:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:56.664 00:04:56.664 real 0m18.576s 00:04:56.664 user 0m31.961s 00:04:56.664 sys 0m5.681s 00:04:56.664 23:51:25 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.664 23:51:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.664 ************************************ 00:04:56.664 END TEST cpu_locks 00:04:56.664 ************************************ 00:04:56.664 00:04:56.664 real 0m43.765s 00:04:56.664 user 1m21.589s 00:04:56.664 sys 0m9.888s 00:04:56.664 23:51:25 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.664 23:51:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.664 ************************************ 00:04:56.664 END TEST event 00:04:56.664 ************************************ 00:04:56.664 23:51:25 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:04:56.664 23:51:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.664 23:51:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.664 23:51:25 -- common/autotest_common.sh@10 -- # set +x 00:04:56.664 ************************************ 00:04:56.664 START TEST thread 00:04:56.664 ************************************ 00:04:56.664 23:51:25 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:04:56.664 * Looking for test storage... 00:04:56.664 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:04:56.664 23:51:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:56.664 23:51:25 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:04:56.664 23:51:25 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.664 23:51:25 thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.664 ************************************ 00:04:56.664 START TEST thread_poller_perf 00:04:56.664 ************************************ 00:04:56.664 23:51:25 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:56.664 [2024-05-14 23:51:25.904637] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:56.664 [2024-05-14 23:51:25.904702] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413867 ] 00:04:56.664 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.664 [2024-05-14 23:51:25.986028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.922 [2024-05-14 23:51:26.105768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.922 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:58.295 ====================================== 00:04:58.295 busy:2712180066 (cyc) 00:04:58.295 total_run_count: 297000 00:04:58.295 tsc_hz: 2700000000 (cyc) 00:04:58.295 ====================================== 00:04:58.295 poller_cost: 9131 (cyc), 3381 (nsec) 00:04:58.295 00:04:58.295 real 0m1.331s 00:04:58.295 user 0m1.226s 00:04:58.295 sys 0m0.099s 00:04:58.295 23:51:27 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.295 23:51:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.295 ************************************ 00:04:58.295 END TEST thread_poller_perf 00:04:58.295 ************************************ 00:04:58.295 23:51:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:58.295 23:51:27 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:04:58.295 23:51:27 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.295 23:51:27 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.295 ************************************ 00:04:58.295 START TEST thread_poller_perf 00:04:58.295 ************************************ 00:04:58.295 23:51:27 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:58.295 [2024-05-14 23:51:27.284043] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:04:58.295 [2024-05-14 23:51:27.284102] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414026 ] 00:04:58.295 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.295 [2024-05-14 23:51:27.357173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.295 [2024-05-14 23:51:27.475594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.295 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:59.669 ====================================== 00:04:59.669 busy:2702690584 (cyc) 00:04:59.669 total_run_count: 3855000 00:04:59.669 tsc_hz: 2700000000 (cyc) 00:04:59.669 ====================================== 00:04:59.669 poller_cost: 701 (cyc), 259 (nsec) 00:04:59.669 00:04:59.669 real 0m1.331s 00:04:59.669 user 0m1.231s 00:04:59.669 sys 0m0.094s 00:04:59.669 23:51:28 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.669 23:51:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.669 ************************************ 00:04:59.669 END TEST thread_poller_perf 00:04:59.669 ************************************ 00:04:59.669 23:51:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:59.669 00:04:59.669 real 0m2.812s 00:04:59.669 user 0m2.520s 00:04:59.669 sys 0m0.287s 00:04:59.669 23:51:28 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.669 23:51:28 thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.669 ************************************ 00:04:59.669 END TEST thread 00:04:59.669 ************************************ 00:04:59.669 23:51:28 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:04:59.669 23:51:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.669 23:51:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.669 23:51:28 -- common/autotest_common.sh@10 -- # set +x 00:04:59.669 ************************************ 00:04:59.669 START TEST accel 00:04:59.669 ************************************ 00:04:59.669 23:51:28 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:04:59.669 * Looking for test storage... 00:04:59.669 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:04:59.669 23:51:28 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:59.669 23:51:28 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:59.669 23:51:28 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:59.669 23:51:28 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=414219 00:04:59.669 23:51:28 accel -- accel/accel.sh@63 -- # waitforlisten 414219 00:04:59.669 23:51:28 accel -- common/autotest_common.sh@827 -- # '[' -z 414219 ']' 00:04:59.669 23:51:28 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.669 23:51:28 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:59.669 23:51:28 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:59.669 23:51:28 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
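Note on the two poller_perf summaries above: each ====== block is two integer divisions over the counters it prints. For the first run (-l 1, i.e. a 1 microsecond period per its banner):

    echo $((2712180066 / 297000))             # 9131 cycles per poller call
    echo $((9131 * 1000000000 / 2700000000))  # 3381 ns at tsc_hz = 2.7 GHz

The -l 0 run measures active pollers rather than 1 us timed ones, and the same arithmetic over its counters (2702690584 / 3855000) yields the far cheaper 701 cyc / 259 ns.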
00:04:59.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.669 23:51:28 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:59.669 23:51:28 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:59.669 23:51:28 accel -- common/autotest_common.sh@10 -- # set +x 00:04:59.669 23:51:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.669 23:51:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.669 23:51:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.669 23:51:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.669 23:51:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.669 23:51:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:59.669 23:51:28 accel -- accel/accel.sh@41 -- # jq -r . 00:04:59.669 [2024-05-14 23:51:28.769304] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:04:59.669 [2024-05-14 23:51:28.769397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414219 ] 00:04:59.669 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.669 [2024-05-14 23:51:28.837842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.669 [2024-05-14 23:51:28.946514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@860 -- # return 0 00:04:59.928 23:51:29 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:59.928 23:51:29 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:59.928 23:51:29 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:59.928 23:51:29 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:59.928 23:51:29 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:59.928 23:51:29 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@10 -- # set +x 00:04:59.928 23:51:29 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # IFS== 00:04:59.928 23:51:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:59.928 23:51:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:59.928 23:51:29 accel -- accel/accel.sh@75 -- # killprocess 414219 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@946 -- # '[' -z 414219 ']' 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@950 -- # kill -0 414219 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@951 -- # uname 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:59.928 23:51:29 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 414219 00:05:00.186 23:51:29 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:00.186 23:51:29 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:00.186 23:51:29 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 414219' 00:05:00.186 killing process with pid 414219 00:05:00.186 23:51:29 accel -- common/autotest_common.sh@965 -- # kill 414219 00:05:00.186 23:51:29 accel -- common/autotest_common.sh@970 -- # wait 414219 00:05:00.445 23:51:29 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:00.445 23:51:29 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:00.445 23:51:29 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:00.445 23:51:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.445 23:51:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:00.445 23:51:29 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:00.445 23:51:29 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:00.445 23:51:29 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:00.445 23:51:29 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:00.445 23:51:29 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:00.445 23:51:29 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.445 23:51:29 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.445 23:51:29 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:00.445 23:51:29 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:00.445 23:51:29 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:00.704 23:51:29 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.704 23:51:29 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:00.704 23:51:29 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:00.704 23:51:29 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:00.704 23:51:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.704 23:51:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:00.704 ************************************ 00:05:00.704 START TEST accel_missing_filename 00:05:00.704 ************************************ 00:05:00.704 23:51:29 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:00.704 23:51:29 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:00.704 23:51:29 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:00.704 23:51:29 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:00.704 23:51:29 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.704 23:51:29 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:00.704 23:51:29 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.704 23:51:29 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:00.704 23:51:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:00.704 23:51:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:00.704 23:51:29 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:00.704 23:51:29 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:00.704 23:51:29 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.704 23:51:29 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.704 23:51:29 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:00.704 23:51:29 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:00.704 23:51:29 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:00.704 [2024-05-14 23:51:29.866625] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:00.704 [2024-05-14 23:51:29.866688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414387 ] 00:05:00.704 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.704 [2024-05-14 23:51:29.940657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.962 [2024-05-14 23:51:30.069006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.962 [2024-05-14 23:51:30.131621] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.962 [2024-05-14 23:51:30.213547] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:01.220 A filename is required. 
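Note: two negative accel_perf invocations are exercised back to back here, and both are wrapped in NOT, so the tests pass precisely because accel_perf exits non-zero (paths shortened for readability):

    accel_perf -t 1 -w compress                       # no -l: "A filename is required."
    accel_perf -t 1 -w compress -l test/accel/bib -y  # -y: "Compression does not support the verify option"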
00:05:01.220 23:51:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:01.220 23:51:30 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.220 23:51:30 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:01.220 23:51:30 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:01.220 23:51:30 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:01.220 23:51:30 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.220 00:05:01.220 real 0m0.496s 00:05:01.220 user 0m0.379s 00:05:01.220 sys 0m0.151s 00:05:01.220 23:51:30 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.220 23:51:30 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:01.220 ************************************ 00:05:01.220 END TEST accel_missing_filename 00:05:01.220 ************************************ 00:05:01.220 23:51:30 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:01.220 23:51:30 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:01.220 23:51:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.220 23:51:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.220 ************************************ 00:05:01.220 START TEST accel_compress_verify 00:05:01.220 ************************************ 00:05:01.220 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:01.220 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:01.220 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:01.220 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:01.220 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.220 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:01.220 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.220 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:01.220 23:51:30 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:01.220 23:51:30 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:01.220 23:51:30 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.220 23:51:30 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.220 23:51:30 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.220 23:51:30 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.220 23:51:30 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.220 23:51:30 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:01.220 23:51:30 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:01.220 [2024-05-14 23:51:30.417641] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:01.220 [2024-05-14 23:51:30.417706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414534 ] 00:05:01.220 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.220 [2024-05-14 23:51:30.490110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.489 [2024-05-14 23:51:30.613220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.489 [2024-05-14 23:51:30.675425] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:01.489 [2024-05-14 23:51:30.763882] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:01.752 00:05:01.752 Compression does not support the verify option, aborting. 00:05:01.752 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:01.752 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.752 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:01.752 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:01.752 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:01.752 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.752 00:05:01.752 real 0m0.495s 00:05:01.752 user 0m0.369s 00:05:01.752 sys 0m0.158s 00:05:01.752 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.752 23:51:30 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:01.752 ************************************ 00:05:01.752 END TEST accel_compress_verify 00:05:01.752 ************************************ 00:05:01.752 23:51:30 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:01.752 23:51:30 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:01.752 23:51:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.752 23:51:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.752 ************************************ 00:05:01.752 START TEST accel_wrong_workload 00:05:01.752 ************************************ 00:05:01.752 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:01.752 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:01.752 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:01.752 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:01.752 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.752 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:01.752 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.752 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:01.752 23:51:30 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:01.752 23:51:30 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:01.752 23:51:30 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.752 23:51:30 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.752 23:51:30 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.752 23:51:30 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.752 23:51:30 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.752 23:51:30 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:01.752 23:51:30 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:01.752 Unsupported workload type: foobar 00:05:01.752 [2024-05-14 23:51:30.963787] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:01.752 accel_perf options: 00:05:01.752 [-h help message] 00:05:01.752 [-q queue depth per core] 00:05:01.752 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:01.752 [-T number of threads per core 00:05:01.752 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:01.752 [-t time in seconds] 00:05:01.752 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:01.752 [ dif_verify, , dif_generate, dif_generate_copy 00:05:01.752 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:01.752 [-l for compress/decompress workloads, name of uncompressed input file 00:05:01.752 [-S for crc32c workload, use this seed value (default 0) 00:05:01.753 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:01.753 [-f for fill workload, use this BYTE value (default 255) 00:05:01.753 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:01.753 [-y verify result if this switch is on] 00:05:01.753 [-a tasks to allocate per core (default: same value as -q)] 00:05:01.753 Can be used to spread operations across a wider range of memory. 
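Note: the es=1 bookkeeping in these traces comes from the NOT wrapper in test/common/autotest_common.sh, which inverts the wrapped command's exit status. Stripped of its signal and es>128 special cases, it behaves like this simplified sketch:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # NOT succeeds only when the wrapped command failed
    }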
00:05:01.753 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:01.753 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.753 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:01.753 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.753 00:05:01.753 real 0m0.023s 00:05:01.753 user 0m0.009s 00:05:01.753 sys 0m0.013s 00:05:01.753 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.753 23:51:30 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:01.753 ************************************ 00:05:01.753 END TEST accel_wrong_workload 00:05:01.753 ************************************ 00:05:01.753 Error: writing output failed: Broken pipe 00:05:01.753 23:51:30 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:01.753 23:51:30 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:01.753 23:51:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.753 23:51:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.753 ************************************ 00:05:01.753 START TEST accel_negative_buffers 00:05:01.753 ************************************ 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:01.753 23:51:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:01.753 23:51:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:01.753 23:51:31 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.753 23:51:31 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.753 23:51:31 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.753 23:51:31 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.753 23:51:31 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.753 23:51:31 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:01.753 23:51:31 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:01.753 -x option must be non-negative. 
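Note: the crc32c test that starts next is a positive accel_perf run; reading its flags against the option help printed twice above:

    accel_perf -t 1 -w crc32c -S 32 -y
    # -t 1       run for 1 second
    # -w crc32c  workload type
    # -S 32      seed value for the crc32c workload
    # -y         verify the result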
00:05:01.753 [2024-05-14 23:51:31.036953] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:01.753 accel_perf options: 00:05:01.753 [-h help message] 00:05:01.753 [-q queue depth per core] 00:05:01.753 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:01.753 [-T number of threads per core 00:05:01.753 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:01.753 [-t time in seconds] 00:05:01.753 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:01.753 [ dif_verify, , dif_generate, dif_generate_copy 00:05:01.753 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:01.753 [-l for compress/decompress workloads, name of uncompressed input file 00:05:01.753 [-S for crc32c workload, use this seed value (default 0) 00:05:01.753 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:01.753 [-f for fill workload, use this BYTE value (default 255) 00:05:01.753 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:01.753 [-y verify result if this switch is on] 00:05:01.753 [-a tasks to allocate per core (default: same value as -q)] 00:05:01.753 Can be used to spread operations across a wider range of memory. 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.753 00:05:01.753 real 0m0.023s 00:05:01.753 user 0m0.012s 00:05:01.753 sys 0m0.011s 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.753 23:51:31 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:01.753 ************************************ 00:05:01.753 END TEST accel_negative_buffers 00:05:01.753 ************************************ 00:05:01.753 Error: writing output failed: Broken pipe 00:05:01.753 23:51:31 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:01.753 23:51:31 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:01.753 23:51:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.753 23:51:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.753 ************************************ 00:05:01.753 START TEST accel_crc32c 00:05:01.753 ************************************ 00:05:01.753 23:51:31 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:01.753 23:51:31 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:01.753 23:51:31 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:01.753 23:51:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:01.753 23:51:31 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:01.753 23:51:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:01.753 23:51:31 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 
00:05:01.753 23:51:31 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:01.753 23:51:31 accel.accel_crc32c -- accel/accel.sh@31-41 -- # [accel_json_cfg=(); no JSON/driver/module overrides; local IFS=,; jq -r .]
00:05:02.012 [2024-05-14 23:51:31.107578] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:05:02.012 [2024-05-14 23:51:31.107652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414627 ]
00:05:02.012 EAL: No free 2048 kB hugepages reported on node 1
00:05:02.012 [2024-05-14 23:51:31.181133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:02.012 [2024-05-14 23:51:31.304374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.271 23:51:31 accel.accel_crc32c -- accel/accel.sh@19-23 -- # [config loop: val=0x1 val=crc32c (accel_opc=crc32c) val=32 val='4096 bytes' val=software (accel_module=software) val=32 val=32 val=1 val='1 seconds' val=Yes]
00:05:03.645 23:51:32 accel.accel_crc32c -- accel/accel.sh@19-21 -- # [remaining val= reads drained after the run]
00:05:03.645 23:51:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:03.645 23:51:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:03.645 23:51:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:03.645 real 0m1.489s
00:05:03.645 user 0m1.340s
00:05:03.645 sys 0m0.152s
00:05:03.645 23:51:32 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:03.645 23:51:32 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x
00:05:03.645 ************************************
00:05:03.645 END TEST accel_crc32c
00:05:03.645 ************************************
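For context on what just passed: the crc32c workload has accel_perf compute CRC-32C (the Castagnoli polynomial) over 4096-byte buffers in the software accel module, with -y re-verifying each result. A minimal bitwise C reference of that checksum — an illustrative sketch, not SPDK's optimized table/instruction-based implementation, and with a made-up buffer pattern:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const void *buf, size_t len, uint32_t seed)
{
    const uint8_t *p = buf;
    uint32_t crc = ~seed;                 /* standard pre-inversion */
    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
    }
    return ~crc;                          /* standard post-inversion */
}

int main(void)
{
    static uint8_t buf[4096];             /* same 4096-byte size as the test */
    memset(buf, 0xA5, sizeof(buf));       /* arbitrary test pattern */
    printf("crc32c: 0x%08x\n", crc32c(buf, sizeof(buf), 0));
    return 0;
}

The pre/post inversion is the standard CRC-32C convention; the chained -C variants below work by deferring that final inversion until the last buffer has been folded in.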
00:05:03.645 23:51:32 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:05:03.646 ************************************
00:05:03.646 START TEST accel_crc32c_C2
00:05:03.646 ************************************
00:05:03.646 23:51:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2
00:05:03.646 23:51:32 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:05:03.646 23:51:32 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:05:03.646 23:51:32 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:03.646 23:51:32 accel.accel_crc32c_C2 -- accel/accel.sh@31-41 -- # [accel_json_cfg=(); no JSON/driver/module overrides; local IFS=,; jq -r .]
00:05:03.646 [2024-05-14 23:51:32.651253] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:05:03.646 [2024-05-14 23:51:32.651317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414879 ]
00:05:03.646 EAL: No free 2048 kB hugepages reported on node 1
00:05:03.646 [2024-05-14 23:51:32.729213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:03.646 [2024-05-14 23:51:32.874502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:03.646 23:51:32 accel.accel_crc32c_C2 -- accel/accel.sh@19-23 -- # [config loop: val=0x1 val=crc32c (accel_opc=crc32c) val=0 val='4096 bytes' val=software (accel_module=software) val=32 val=32 val=1 val='1 seconds' val=Yes]
00:05:05.017 23:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19-21 -- # [remaining val= reads drained after the run]
00:05:05.017 23:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:05.017 23:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:05.017 23:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:05.017 real 0m1.531s
00:05:05.017 user 0m1.364s
00:05:05.017 sys 0m0.167s
00:05:05.017 23:51:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:05.017 23:51:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:05.017 ************************************
00:05:05.017 END TEST accel_crc32c_C2
00:05:05.017 ************************************
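The _C2 variant adds -C 2 to the same workload. My reading of that option (an assumption, not stated in the log) is that accel_perf submits the crc32c over a chain of two source buffers and expects a single CRC across both. A sketch of that chaining, using a hypothetical raw-update helper (no init/final XOR) so state carries across segments:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

/* Raw CRC-32C update: no init/final XOR, so calls chain across buffers. */
static uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;
    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
    }
    return crc;
}

int main(void)
{
    static uint8_t a[4096], b[4096];      /* two chained source segments */
    memset(a, 0x11, sizeof(a));
    memset(b, 0x22, sizeof(b));
    struct iovec iov[2] = { { a, sizeof(a) }, { b, sizeof(b) } };

    uint32_t crc = ~0u;                   /* seed 0 (the val=0 above), pre-inverted */
    for (int i = 0; i < 2; i++)
        crc = crc32c_update(crc, iov[i].iov_base, iov[i].iov_len);
    printf("chained crc32c: 0x%08x\n", ~crc);
    return 0;
}

Splitting the data across segments this way yields the same CRC as one contiguous 8192-byte buffer, which is what a -y verify pass can check against.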
00:05:05.017 23:51:34 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:05:05.017 ************************************
00:05:05.017 START TEST accel_copy
00:05:05.017 ************************************
00:05:05.017 23:51:34 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y
00:05:05.017 23:51:34 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:05:05.018 23:51:34 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:05.018 23:51:34 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:05.018 23:51:34 accel.accel_copy -- accel/accel.sh@31-41 -- # [accel_json_cfg=(); no JSON/driver/module overrides; local IFS=,; jq -r .]
00:05:05.018 [2024-05-14 23:51:34.235395] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:05:05.018 [2024-05-14 23:51:34.235459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415040 ]
00:05:05.018 EAL: No free 2048 kB hugepages reported on node 1
00:05:05.018 [2024-05-14 23:51:34.308328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.018 [2024-05-14 23:51:34.432614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.276 23:51:34 accel.accel_copy -- accel/accel.sh@19-23 -- # [config loop: val=0x1 val=copy (accel_opc=copy) val='4096 bytes' val=software (accel_module=software) val=32 val=32 val=1 val='1 seconds' val=Yes]
00:05:06.649 23:51:35 accel.accel_copy -- accel/accel.sh@19-21 -- # [remaining val= reads drained after the run]
00:05:06.649 23:51:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:06.649 23:51:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:05:06.649 23:51:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:06.649 real 0m1.495s
00:05:06.649 user 0m1.338s
00:05:06.649 sys 0m0.157s
00:05:06.649 23:51:35 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:06.649 23:51:35 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:05:06.649 ************************************
00:05:06.649 END TEST accel_copy
00:05:06.649 ************************************
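accel_copy is the simplest of these workloads: in the software module the operation reduces to a memcpy, and -y then compares destination against source. Sketched as plain C (size matches the test's 4096 bytes; the pattern is illustrative):

#include <stdio.h>
#include <string.h>

int main(void)
{
    static unsigned char src[4096], dst[4096];
    for (size_t i = 0; i < sizeof(src); i++)
        src[i] = (unsigned char)i;        /* arbitrary source pattern */
    memcpy(dst, src, sizeof(src));        /* the copy op itself */
    puts(memcmp(src, dst, sizeof(src)) == 0 ? "copy verified"
                                            : "copy MISMATCH");
    return 0;
}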
00:05:06.649 23:51:35 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:06.649 ************************************
00:05:06.649 START TEST accel_fill
00:05:06.649 ************************************
00:05:06.649 23:51:35 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:06.649 23:51:35 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:06.649 23:51:35 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:06.649 23:51:35 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:05:06.649 23:51:35 accel.accel_fill -- accel/accel.sh@31-41 -- # [accel_json_cfg=(); no JSON/driver/module overrides; local IFS=,; jq -r .]
00:05:06.649 [2024-05-14 23:51:35.785845] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:05:06.649 [2024-05-14 23:51:35.785912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415311 ]
00:05:06.649 EAL: No free 2048 kB hugepages reported on node 1
00:05:06.649 [2024-05-14 23:51:35.863724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.649 [2024-05-14 23:51:35.983476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.907 23:51:36 accel.accel_fill -- accel/accel.sh@19-23 -- # [config loop: val=0x1 val=fill (accel_opc=fill) val=0x80 val='4096 bytes' val=software (accel_module=software) val=64 val=64 val=1 val='1 seconds' val=Yes]
00:05:08.281 23:51:37 accel.accel_fill -- accel/accel.sh@19-21 -- # [remaining val= reads drained after the run]
00:05:08.281 23:51:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:08.281 23:51:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:05:08.281 23:51:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:08.281 real 0m1.498s
00:05:08.281 user 0m1.338s
00:05:08.281 sys 0m0.161s
00:05:08.281 23:51:37 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:08.281 23:51:37 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:05:08.281 ************************************
00:05:08.281 END TEST accel_fill
00:05:08.281 ************************************
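Two details of the fill trace are worth decoding: val=0x80 is the -f 128 fill byte in hex, and the val=64 pair lines up with the -q 64 queue depth (an inference from the options, not something the log spells out). The operation itself is a pattern memset plus -y verification, roughly:

#include <stdio.h>
#include <string.h>

int main(void)
{
    static unsigned char buf[4096];
    memset(buf, 0x80, sizeof(buf));       /* fill with the -f 128 pattern */
    for (size_t i = 0; i < sizeof(buf); i++)
        if (buf[i] != 0x80) { puts("fill MISMATCH"); return 1; }
    puts("fill verified");
    return 0;
}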
00:05:08.281 23:51:37 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:05:08.281 ************************************
00:05:08.281 START TEST accel_copy_crc32c
00:05:08.281 ************************************
00:05:08.281 23:51:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y
00:05:08.281 23:51:37 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:05:08.281 23:51:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:08.281 23:51:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:08.281 23:51:37 accel.accel_copy_crc32c -- accel/accel.sh@31-41 -- # [accel_json_cfg=(); no JSON/driver/module overrides; local IFS=,; jq -r .]
00:05:08.281 [2024-05-14 23:51:37.333779] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:05:08.281 [2024-05-14 23:51:37.333844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415473 ]
00:05:08.281 EAL: No free 2048 kB hugepages reported on node 1
00:05:08.281 [2024-05-14 23:51:37.412564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.281 [2024-05-14 23:51:37.535182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.282 23:51:37 accel.accel_copy_crc32c -- accel/accel.sh@19-23 -- # [config loop: val=0x1 val=copy_crc32c (accel_opc=copy_crc32c) val=0 val='4096 bytes' val='4096 bytes' val=software (accel_module=software) val=32 val=32 val=1 val='1 seconds' val=Yes]
00:05:09.655 23:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19-21 -- # [remaining val= reads drained after the run]
00:05:09.655 23:51:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:09.655 23:51:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:09.655 23:51:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:09.655 real 0m1.490s
00:05:09.655 user 0m1.342s
00:05:09.655 sys 0m0.151s
00:05:09.655 23:51:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:09.655 23:51:38 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:05:09.655 ************************************
00:05:09.655 END TEST accel_copy_crc32c
00:05:09.655 ************************************
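copy_crc32c is a fused operation: copy the source into the destination and return the CRC-32C of the data in the same pass, which the two equal '4096 bytes' reads above are consistent with. A single-pass sketch (the bitwise reference again, not the real implementation):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Copy src to dst while accumulating the CRC-32C of the bytes moved. */
static uint32_t copy_crc32c(void *dst, const void *src, size_t len,
                            uint32_t seed)
{
    const uint8_t *s = src;
    uint8_t *d = dst;
    uint32_t crc = ~seed;
    while (len--) {
        uint8_t byte = *s++;
        *d++ = byte;
        crc ^= byte;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    static uint8_t src[4096], dst[4096];  /* equal sizes, as in the trace */
    memset(src, 0x5A, sizeof(src));
    uint32_t crc = copy_crc32c(dst, src, sizeof(src), 0);
    printf("copy ok: %d, crc32c: 0x%08x\n",
           memcmp(src, dst, sizeof(src)) == 0, crc);
    return 0;
}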
00:05:09.655 23:51:38 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:09.655 ************************************
00:05:09.655 START TEST accel_copy_crc32c_C2
00:05:09.655 ************************************
00:05:09.655 23:51:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:05:09.655 23:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:09.655 23:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:09.655 23:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:09.655 23:51:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31-41 -- # [accel_json_cfg=(); no JSON/driver/module overrides; local IFS=,; jq -r .]
00:05:09.655 [2024-05-14 23:51:38.879331] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:05:09.655 [2024-05-14 23:51:38.879395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415631 ]
00:05:09.655 EAL: No free 2048 kB hugepages reported on node 1
00:05:09.655 [2024-05-14 23:51:38.956470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:09.655 [2024-05-14 23:51:39.079540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.914 23:51:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19-23 -- # [config loop: val=0x1 val=copy_crc32c (accel_opc=copy_crc32c) val=0 val='4096 bytes' val='8192 bytes' val=software (accel_module=software) val=32 val=32 val=1 val='1 seconds' val=Yes]
00:05:11.287 23:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19-21 -- # [remaining val= reads drained after the run]
00:05:11.287 23:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:11.287 23:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:11.287 23:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:11.287 real 0m1.495s
00:05:11.287 user 0m1.339s
00:05:11.287 sys 0m0.158s
00:05:11.287 23:51:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:11.287 23:51:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:11.287 ************************************
00:05:11.287 END TEST accel_copy_crc32c_C2
00:05:11.287 ************************************
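For the -C 2 form the trace reads '4096 bytes' and then '8192 bytes', consistent with two chained 4096-byte source segments gathered into one 8192-byte destination — again an assumption about the option, not something the log states. A gather-plus-CRC sketch:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

/* Gather the source iovs into dst while computing one CRC-32C overall. */
static uint32_t copy_crc32c_iov(uint8_t *dst, const struct iovec *iov,
                                int iovcnt, uint32_t seed)
{
    uint32_t crc = ~seed;
    for (int i = 0; i < iovcnt; i++) {
        const uint8_t *s = iov[i].iov_base;
        for (size_t n = 0; n < iov[i].iov_len; n++) {
            crc ^= (*dst++ = s[n]);       /* copy and fold in one step */
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
        }
    }
    return ~crc;
}

int main(void)
{
    static uint8_t a[4096], b[4096], dst[8192];   /* 2 x 4096 -> 8192 */
    memset(a, 0x33, sizeof(a));
    memset(b, 0x44, sizeof(b));
    struct iovec iov[2] = { { a, sizeof(a) }, { b, sizeof(b) } };
    printf("chained copy crc32c: 0x%08x\n",
           copy_crc32c_iov(dst, iov, 2, 0));
    return 0;
}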
************************************ 00:05:11.287 END TEST accel_copy_crc32c_C2 00:05:11.287 ************************************ 00:05:11.287 23:51:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:11.287 23:51:40 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:11.287 23:51:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.287 23:51:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.287 ************************************ 00:05:11.287 START TEST accel_dualcast 00:05:11.287 ************************************ 00:05:11.287 23:51:40 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:11.287 23:51:40 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:11.287 [2024-05-14 23:51:40.427011] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:05:11.287 [2024-05-14 23:51:40.427076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415906 ] 00:05:11.287 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.287 [2024-05-14 23:51:40.502640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.287 [2024-05-14 23:51:40.623601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 
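(The EAL parameters above pin accel_perf to core mask -c 0x1, which is why spdk_app_start reports a single available core and the reactor runs on core 0. Counting the cores a mask selects is a few lines of bit twiddling:

    mask=0x1      # the core mask from the EAL parameters above
    count=0
    for (( bits = mask; bits > 0; bits >>= 1 )); do
        (( count += bits & 1 ))
    done
    echo "cores selected by $mask: $count"    # 1 for 0x1
)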
23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:11.546 23:51:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:41 accel.accel_dualcast -- 
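(The trace above is single-stepping a parse loop: IFS=: splits each line of accel_perf output into var and val, and case "$var" dispatches on the key. A self-contained sketch of the same shape; the key names in the here-doc are illustrative, not accel_perf's literal output:

    while IFS=: read -r var val; do
        case "$var" in
            *module*) accel_module=${val//[[:space:]]/} ;;   # e.g. "software"
            *) : ;;                                          # other keys ignored here
        esac
    done <<'EOF'
    module: software
    opcode: dualcast
    EOF
    echo "parsed accel_module=$accel_module"
)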
accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:12.924 23:51:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.924 00:05:12.924 real 0m1.500s 00:05:12.924 user 0m1.336s 00:05:12.924 sys 0m0.166s 00:05:12.924 23:51:41 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.924 23:51:41 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:12.924 ************************************ 00:05:12.924 END TEST accel_dualcast 00:05:12.924 ************************************ 00:05:12.924 23:51:41 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:12.924 23:51:41 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:12.924 23:51:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.924 23:51:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.924 ************************************ 00:05:12.924 START TEST accel_compare 00:05:12.924 ************************************ 00:05:12.924 23:51:41 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:12.924 23:51:41 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:12.924 [2024-05-14 23:51:41.980969] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
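(The repeated [[ software == \s\o\f\t\w\a\r\e ]] checks look garbled but are just xtrace's rendering: when the right-hand side of == inside [[ ]] is quoted, set -x prints it with every character backslash-escaped to show it is matched literally rather than as a glob pattern:

    mod=software
    [[ $mod == software ]]       # unquoted RHS: treated as a pattern
    [[ $mod == "software" ]]     # quoted RHS: literal match; xtrace shows \s\o\f\t\w\a\r\e
    echo "both comparisons succeed for mod=$mod"
)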
00:05:12.924 [2024-05-14 23:51:41.981038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416064 ] 00:05:12.924 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.924 [2024-05-14 23:51:42.056351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.924 [2024-05-14 23:51:42.187257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- 
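(The "EAL: No free 2048 kB hugepages reported on node 1" notice recurs on every run here yet the tests still pass, so it reads as informational. Per-node 2048 kB hugepage headroom can be checked directly; both interfaces below are standard kernel paths:

    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    # per-NUMA-node view; node1 may not exist on smaller machines
    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages
)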
accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.924 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:12.925 23:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.299 23:51:43 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:14.299 23:51:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.299 00:05:14.299 real 0m1.496s 00:05:14.299 user 0m1.338s 00:05:14.299 sys 0m0.159s 00:05:14.299 23:51:43 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.299 23:51:43 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:14.299 ************************************ 00:05:14.299 END TEST accel_compare 00:05:14.299 ************************************ 00:05:14.299 23:51:43 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:14.300 23:51:43 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:14.300 23:51:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.300 23:51:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.300 ************************************ 00:05:14.300 START TEST accel_xor 00:05:14.300 ************************************ 00:05:14.300 23:51:43 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:14.300 23:51:43 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:14.300 [2024-05-14 23:51:43.531070] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
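(This xor run passes no -x flag and the trace below reads val=2, which lines up with a default of two source buffers; the follow-up test repeats the workload with -x 3 and reads val=3. Both invocations, flags exactly as logged:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w xor -y          # default: 2 xor sources
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w xor -y -x 3     # 3 xor sources
)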
00:05:14.300 [2024-05-14 23:51:43.531137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416226 ] 00:05:14.300 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.300 [2024-05-14 23:51:43.606695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.558 [2024-05-14 23:51:43.727855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:14.558 23:51:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:15.929 
23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:15.929 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.930 00:05:15.930 real 0m1.495s 00:05:15.930 user 0m1.341s 00:05:15.930 sys 0m0.156s 00:05:15.930 23:51:45 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.930 23:51:45 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:15.930 ************************************ 00:05:15.930 END TEST accel_xor 00:05:15.930 ************************************ 00:05:15.930 23:51:45 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:15.930 23:51:45 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:15.930 23:51:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.930 23:51:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.930 ************************************ 00:05:15.930 START TEST accel_xor 00:05:15.930 ************************************ 00:05:15.930 23:51:45 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:15.930 23:51:45 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:15.930 [2024-05-14 23:51:45.077822] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
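(With three sources the expected destination is the byte-wise a ^ b ^ c that the -y flag, which appears to request result verification, then checks. The identity on single bytes:

    a=0xA5; b=0x3C; c=0xF0
    printf 'a^b^c = 0x%02X\n' $(( a ^ b ^ c ))    # 0x69
)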
00:05:15.930 [2024-05-14 23:51:45.077885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416498 ] 00:05:15.930 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.930 [2024-05-14 23:51:45.150762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.930 [2024-05-14 23:51:45.274047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.188 23:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.559 
23:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:17.559 23:51:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.559 00:05:17.559 real 0m1.499s 00:05:17.559 user 0m1.338s 00:05:17.559 sys 0m0.162s 00:05:17.559 23:51:46 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.559 23:51:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:17.559 ************************************ 00:05:17.559 END TEST accel_xor 00:05:17.559 ************************************ 00:05:17.559 23:51:46 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:17.559 23:51:46 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:17.559 23:51:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.559 23:51:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.559 ************************************ 00:05:17.559 START TEST accel_dif_verify 00:05:17.559 ************************************ 00:05:17.559 23:51:46 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:17.559 [2024-05-14 23:51:46.633105] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
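(The dif_verify config read below pulls four sizes: two 4096-byte buffers, 512 bytes, and 8 bytes. Assuming these carry their usual DIF meaning, 512-byte blocks each guarded by an 8-byte protection tuple, the per-buffer overhead works out as:

    buf=4096; blk=512; tuple=8     # sizes as read from the trace below
    echo "blocks per buffer:    $(( buf / blk ))"            # 8
    echo "protection bytes/buf: $(( (buf / blk) * tuple ))"  # 64
)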
00:05:17.559 [2024-05-14 23:51:46.633168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416656 ] 00:05:17.559 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.559 [2024-05-14 23:51:46.708176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.559 [2024-05-14 23:51:46.836742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.559 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 
23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.560 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:17.817 23:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:18.784 
23:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:18.784 23:51:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.784 00:05:18.784 real 0m1.496s 00:05:18.784 user 0m1.337s 00:05:18.784 sys 0m0.163s 00:05:18.784 23:51:48 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.784 23:51:48 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:18.784 ************************************ 00:05:18.784 END TEST accel_dif_verify 00:05:18.784 ************************************ 00:05:19.042 23:51:48 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:19.042 23:51:48 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:19.042 23:51:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.042 23:51:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.042 ************************************ 00:05:19.042 START TEST accel_dif_generate 00:05:19.042 ************************************ 00:05:19.042 23:51:48 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
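(dif_generate is the producing half of the previous test: it emits the protection tuples that dif_verify checks, and the harness drives both workloads through the same accel_perf binary. The pair of invocations, flags as logged:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w dif_generate
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w dif_verify
)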
00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:19.042 23:51:48 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:19.042 [2024-05-14 23:51:48.179264] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:19.042 [2024-05-14 23:51:48.179329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416877 ] 00:05:19.042 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.042 [2024-05-14 23:51:48.252157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.042 [2024-05-14 23:51:48.376132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@23 -- 
# accel_opc=dif_generate 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.301 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.302 23:51:48 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:19.302 23:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:20.676 23:51:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.676 00:05:20.676 real 0m1.493s 00:05:20.676 user 0m1.341s 00:05:20.676 sys 0m0.155s 00:05:20.677 
23:51:49 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.677 23:51:49 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:20.677 ************************************ 00:05:20.677 END TEST accel_dif_generate 00:05:20.677 ************************************ 00:05:20.677 23:51:49 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:20.677 23:51:49 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:20.677 23:51:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.677 23:51:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.677 ************************************ 00:05:20.677 START TEST accel_dif_generate_copy 00:05:20.677 ************************************ 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:20.677 [2024-05-14 23:51:49.723160] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
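Every case in this stretch drives the same binary, build/examples/accel_perf, through the run_test/accel_test wrappers: -t picks the run time in seconds, -w the workload, and -c /dev/fd/62 feeds in the JSON accel config assembled by build_accel_config. A minimal by-hand reproduction of the dif_generate case just finished above, assuming an empty "[]" config is acceptable here (the accel_json_cfg=() array in these traces never gains an entry, so only the software module is in play), might look like:

    # Hedged sketch: re-running dif_generate outside the harness. The path
    # and flags are copied from the trace above; feeding "[]" on fd 62 is
    # an assumption based on accel_json_cfg staying empty in these runs.
    perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
    "$perf" -c /dev/fd/62 -t 1 -w dif_generate 62<<< '[]'

The '4096 bytes', '512 bytes', and '8 bytes' values echoed back in the dif_generate trace presumably describe the transfer buffer, the DIF block size, and the per-block metadata the workload generates protection information for.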
00:05:20.677 [2024-05-14 23:51:49.723228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417095 ] 00:05:20.677 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.677 [2024-05-14 23:51:49.799186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.677 [2024-05-14 23:51:49.920890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.677 23:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
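The long runs of "IFS=: / read -r var val / case \"$var\"" entries above and below come from accel.sh parsing accel_perf's key: value summary back into shell variables (accel_opc is set at accel.sh@23, accel_module at accel.sh@22). A rough reconstruction of that loop, with illustrative case patterns rather than the script's literal ones:

    # Sketch of the parser behind the repeated trace lines at
    # accel/accel.sh@19-23; the key names matched here are assumptions.
    perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
    while IFS=: read -r var val; do
        case "$var" in
            *engine* | *module*)   accel_module=${val# } ;;  # e.g. "software"
            *opcode* | *workload*) accel_opc=${val# } ;;     # e.g. "dif_generate_copy"
        esac
    done < <("$perf" -c /dev/fd/62 -t 1 -w dif_generate_copy 62<<< '[]')

The three [[ ... ]] checks at accel.sh@27 that close each run then assert that a module and an opcode were actually reported and that the software path handled the workload.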
00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.050 00:05:22.050 real 0m1.501s 00:05:22.050 user 0m1.345s 00:05:22.050 sys 0m0.158s 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.050 23:51:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:22.050 ************************************ 00:05:22.050 END TEST accel_dif_generate_copy 00:05:22.050 ************************************ 00:05:22.050 23:51:51 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:22.050 23:51:51 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:22.050 23:51:51 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:22.050 23:51:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.050 23:51:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.050 ************************************ 00:05:22.050 START TEST accel_comp 00:05:22.050 ************************************ 00:05:22.050 23:51:51 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@17 -- # local 
accel_module 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:22.050 23:51:51 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:22.050 [2024-05-14 23:51:51.277013] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:22.050 [2024-05-14 23:51:51.277079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417246 ] 00:05:22.050 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.050 [2024-05-14 23:51:51.349070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.309 [2024-05-14 23:51:51.471750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.309 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.309 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.309 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.309 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.309 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.309 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.309 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.309 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp 
-- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case 
"$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.310 23:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.683 23:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.684 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.684 23:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.684 23:51:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.684 23:51:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:23.684 23:51:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.684 00:05:23.684 real 0m1.490s 00:05:23.684 user 0m1.344s 00:05:23.684 sys 0m0.148s 00:05:23.684 23:51:52 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.684 23:51:52 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:23.684 ************************************ 00:05:23.684 END TEST accel_comp 00:05:23.684 ************************************ 00:05:23.684 23:51:52 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:23.684 23:51:52 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:23.684 23:51:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.684 23:51:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.684 ************************************ 00:05:23.684 START TEST accel_decomp 00:05:23.684 ************************************ 00:05:23.684 23:51:52 accel.accel_decomp -- 
common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:23.684 23:51:52 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:23.684 [2024-05-14 23:51:52.817805] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:23.684 [2024-05-14 23:51:52.817871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417525 ] 00:05:23.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.684 [2024-05-14 23:51:52.891225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.684 [2024-05-14 23:51:53.014028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 
-- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.942 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var 
val 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.943 23:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:25.328 23:51:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.328 00:05:25.328 real 0m1.492s 00:05:25.328 user 0m1.343s 00:05:25.328 sys 0m0.152s 00:05:25.328 23:51:54 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.328 23:51:54 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:25.328 ************************************ 00:05:25.328 END TEST accel_decomp 00:05:25.328 ************************************ 00:05:25.328 23:51:54 accel -- accel/accel.sh@118 -- # run_test 
accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:25.328 23:51:54 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:25.328 23:51:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.328 23:51:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.328 ************************************ 00:05:25.328 START TEST accel_decmop_full 00:05:25.328 ************************************ 00:05:25.328 23:51:54 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:25.328 23:51:54 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:25.328 23:51:54 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:25.328 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 23:51:54 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:25.328 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 23:51:54 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:25.328 23:51:54 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:25.329 [2024-05-14 23:51:54.360699] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
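accel_decmop_full (the transposed spelling is in the test script itself, so it is kept verbatim here) repeats the decompress workload with -o 0 instead of the default transfer size. Judging by the '111250 bytes' echoed in the trace that follows, -o 0 appears to make accel_perf size its buffers from the whole fixture rather than 4096-byte chunks. A hedged sketch of the invocation being traced here:

    # -l points accel_perf at the same test/accel/bib fixture the compress
    # case above used; -y is read here as "verify the decompressed output"
    # and -o 0 as "use the input's full size" -- both inferred from this
    # log rather than from the tool's documentation.
    perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
    bib=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
    "$perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0 62<<< '[]'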
00:05:25.329 [2024-05-14 23:51:54.360764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417681 ] 00:05:25.329 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.329 [2024-05-14 23:51:54.434209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.329 [2024-05-14 23:51:54.556941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
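Each case in this file is bracketed by the asterisk START TEST/END TEST banners and a real/user/sys timing block, which points at a wrapper of roughly this shape (a reconstruction from its output only; the real run_test lives in common/autotest_common.sh and also handles xtrace and result bookkeeping):

    # Approximate shape of run_test as observed from the banners and
    # timings in this log; not the verbatim autotest_common.sh source.
    run_test() {
        local name=$1; shift
        printf '%s\n' "************************************" \
                      "START TEST $name" \
                      "************************************"
        time "$@"
        printf '%s\n' "************************************" \
                      "END TEST $name" \
                      "************************************"
    }

Invocation then matches the line logged at accel.sh@118 above: run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0.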
00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 23:51:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # 
read -r var val 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:26.700 23:51:55 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.700 00:05:26.700 real 0m1.513s 00:05:26.700 user 0m1.367s 00:05:26.700 sys 0m0.149s 00:05:26.700 23:51:55 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.700 23:51:55 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:26.700 ************************************ 00:05:26.700 END TEST accel_decmop_full 00:05:26.700 ************************************ 00:05:26.700 23:51:55 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:26.700 23:51:55 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:26.700 23:51:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.700 23:51:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.700 ************************************ 00:05:26.700 START TEST accel_decomp_mcore 00:05:26.700 ************************************ 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:26.700 23:51:55 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:26.700 [2024-05-14 23:51:55.930136] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:26.700 [2024-05-14 23:51:55.930197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417840 ] 00:05:26.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.700 [2024-05-14 23:51:56.005720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.958 [2024-05-14 23:51:56.130686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.958 [2024-05-14 23:51:56.130748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.958 [2024-05-14 23:51:56.130811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.958 [2024-05-14 23:51:56.130815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:26.958 23:51:56 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.958 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:26.959 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:26.959 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:26.959 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:26.959 23:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.332 00:05:28.332 real 0m1.499s 00:05:28.332 user 0m4.777s 00:05:28.332 sys 0m0.159s 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.332 23:51:57 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:28.332 ************************************ 00:05:28.332 END TEST accel_decomp_mcore 00:05:28.332 ************************************ 00:05:28.332 23:51:57 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:28.332 23:51:57 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:28.332 23:51:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.332 23:51:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.332 ************************************ 00:05:28.332 START TEST accel_decomp_full_mcore 00:05:28.332 ************************************ 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@34 
-- # [[ 0 -gt 0 ]] 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:28.332 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:28.332 [2024-05-14 23:51:57.479064] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:28.332 [2024-05-14 23:51:57.479130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418114 ] 00:05:28.332 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.332 [2024-05-14 23:51:57.559297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.590 [2024-05-14 23:51:57.686363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.590 [2024-05-14 23:51:57.686416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.590 [2024-05-14 23:51:57.686468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.590 [2024-05-14 23:51:57.686472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.590 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.591 23:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.963 23:51:58 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.963 00:05:29.963 real 0m1.531s 00:05:29.963 user 0m4.871s 00:05:29.963 sys 0m0.169s 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.963 23:51:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:29.963 ************************************ 00:05:29.963 END TEST accel_decomp_full_mcore 00:05:29.963 ************************************ 00:05:29.963 23:51:59 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:29.963 23:51:59 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:29.963 23:51:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.963 23:51:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.963 ************************************ 00:05:29.963 START TEST accel_decomp_mthread 00:05:29.963 ************************************ 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:29.963 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
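The xtrace above shows the harness building an empty accel JSON config and feeding it to accel_perf over /dev/fd/62. Stripped of the wrapper, the equivalent standalone command is the following sketch; the spdk path is this job's checkout, and -c /dev/fd/62 is omitted on the assumption that an empty config and no config behave the same here:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # 1-second software decompress of the pre-compressed bib fixture,
  # verifying output (-y) and using the mthread variant's 2 worker threads (-T 2)
  "$spdk"/build/examples/accel_perf -t 1 -w decompress -l "$spdk"/test/accel/bib -y -T 2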
00:05:29.963 [2024-05-14 23:51:59.064724] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:29.963 [2024-05-14 23:51:59.064788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418281 ] 00:05:29.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.964 [2024-05-14 23:51:59.139307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.964 [2024-05-14 23:51:59.261766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.222 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.223 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.223 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:30.223 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:30.223 23:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.223 23:51:59 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.596 00:05:31.596 real 0m1.509s 00:05:31.596 user 0m1.358s 00:05:31.596 sys 0m0.153s 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.596 23:52:00 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:31.596 ************************************ 00:05:31.596 END TEST accel_decomp_mthread 00:05:31.596 ************************************ 00:05:31.596 23:52:00 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:31.596 23:52:00 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:31.596 23:52:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.596 23:52:00 accel 
-- common/autotest_common.sh@10 -- # set +x 00:05:31.596 ************************************ 00:05:31.596 START TEST accel_decomp_full_mthread 00:05:31.596 ************************************ 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:31.596 [2024-05-14 23:52:00.626031] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
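Note the '111250 bytes' buffer size recorded for the full_mcore run above, versus '4096 bytes' in the plain mcore and mthread runs: the -o 0 flag that distinguishes the full_* variants appears to make accel_perf consume the whole bib fixture per operation instead of its default 4 KiB transfer size. As a sketch of the two shapes, with spdk set to this job's checkout as before:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # plain variant: default 4096-byte transfers
  "$spdk"/build/examples/accel_perf -t 1 -w decompress -l "$spdk"/test/accel/bib -y -T 2
  # full variant: -o 0, one whole-file (111250-byte) buffer per decompress op
  "$spdk"/build/examples/accel_perf -t 1 -w decompress -l "$spdk"/test/accel/bib -y -o 0 -T 2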
00:05:31.596 [2024-05-14 23:52:00.626095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418511 ] 00:05:31.596 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.596 [2024-05-14 23:52:00.703896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.596 [2024-05-14 23:52:00.824110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.596 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.597 23:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:32.967 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.968 00:05:32.968 real 0m1.537s 00:05:32.968 user 0m1.380s 00:05:32.968 sys 0m0.160s 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.968 23:52:02 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:32.968 ************************************ 00:05:32.968 END TEST accel_decomp_full_mthread 00:05:32.968 
************************************ 00:05:32.968 23:52:02 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:32.968 23:52:02 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:32.968 23:52:02 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:32.968 23:52:02 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:32.968 23:52:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.968 23:52:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.968 23:52:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.968 23:52:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.968 23:52:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.968 23:52:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.968 23:52:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.968 23:52:02 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:32.968 23:52:02 accel -- accel/accel.sh@41 -- # jq -r . 00:05:32.968 ************************************ 00:05:32.968 START TEST accel_dif_functional_tests 00:05:32.968 ************************************ 00:05:32.968 23:52:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:32.968 [2024-05-14 23:52:02.235549] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:32.968 [2024-05-14 23:52:02.235607] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418714 ] 00:05:32.968 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.968 [2024-05-14 23:52:02.306558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.226 [2024-05-14 23:52:02.431992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.227 [2024-05-14 23:52:02.432047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.227 [2024-05-14 23:52:02.432052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.227 00:05:33.227 00:05:33.227 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.227 http://cunit.sourceforge.net/ 00:05:33.227 00:05:33.227 00:05:33.227 Suite: accel_dif 00:05:33.227 Test: verify: DIF generated, GUARD check ...passed 00:05:33.227 Test: verify: DIF generated, APPTAG check ...passed 00:05:33.227 Test: verify: DIF generated, REFTAG check ...passed 00:05:33.227 Test: verify: DIF not generated, GUARD check ...[2024-05-14 23:52:02.534472] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:33.227 [2024-05-14 23:52:02.534538] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:33.227 passed 00:05:33.227 Test: verify: DIF not generated, APPTAG check ...[2024-05-14 23:52:02.534581] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:33.227 [2024-05-14 23:52:02.534611] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:33.227 passed 00:05:33.227 Test: verify: DIF not generated, REFTAG check ...[2024-05-14 23:52:02.534646] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:33.227 [2024-05-14 23:52:02.534677] 
dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:33.227 passed 00:05:33.227 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:33.227 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-14 23:52:02.534751] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:33.227 passed 00:05:33.227 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:33.227 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:33.227 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:33.227 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-14 23:52:02.534910] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:33.227 passed 00:05:33.227 Test: generate copy: DIF generated, GUARD check ...passed 00:05:33.227 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:33.227 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:33.227 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:33.227 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:33.227 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:33.227 Test: generate copy: iovecs-len validate ...[2024-05-14 23:52:02.535198] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:33.227 passed 00:05:33.227 Test: generate copy: buffer alignment validate ...passed 00:05:33.227 00:05:33.227 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.227 suites 1 1 n/a 0 0 00:05:33.227 tests 20 20 20 0 0 00:05:33.227 asserts 204 204 204 0 n/a 00:05:33.227 00:05:33.227 Elapsed time = 0.003 seconds 00:05:33.486 00:05:33.486 real 0m0.609s 00:05:33.486 user 0m0.914s 00:05:33.486 sys 0m0.198s 00:05:33.486 23:52:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.486 23:52:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:33.486 ************************************ 00:05:33.486 END TEST accel_dif_functional_tests 00:05:33.486 ************************************ 00:05:33.486 00:05:33.486 real 0m34.157s 00:05:33.486 user 0m37.156s 00:05:33.486 sys 0m4.996s 00:05:33.486 23:52:02 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.486 23:52:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.486 ************************************ 00:05:33.486 END TEST accel 00:05:33.486 ************************************ 00:05:33.744 23:52:02 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:33.744 23:52:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.744 23:52:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.744 23:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:33.744 ************************************ 00:05:33.744 START TEST accel_rpc 00:05:33.744 ************************************ 00:05:33.744 23:52:02 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:33.744 * Looking for test storage... 
00:05:33.744 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:05:33.744 23:52:02 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.744 23:52:02 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=418899 00:05:33.744 23:52:02 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:33.744 23:52:02 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 418899 00:05:33.744 23:52:02 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 418899 ']' 00:05:33.744 23:52:02 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.744 23:52:02 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:33.744 23:52:02 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.744 23:52:02 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:33.744 23:52:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.744 [2024-05-14 23:52:02.986574] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:33.744 [2024-05-14 23:52:02.986660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418899 ] 00:05:33.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.744 [2024-05-14 23:52:03.053421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.001 [2024-05-14 23:52:03.159879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.947 23:52:03 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:34.947 23:52:03 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:34.947 23:52:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:34.947 23:52:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:34.947 23:52:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:34.947 23:52:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:34.947 23:52:03 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:34.947 23:52:03 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.947 23:52:03 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.947 23:52:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 ************************************ 00:05:34.947 START TEST accel_assign_opcode 00:05:34.947 ************************************ 00:05:34.947 23:52:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:05:34.947 23:52:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:34.947 23:52:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.947 23:52:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 [2024-05-14 23:52:03.998491] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 [2024-05-14 23:52:04.006496] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.205 software 00:05:35.205 00:05:35.205 real 0m0.315s 00:05:35.205 user 0m0.041s 00:05:35.205 sys 0m0.009s 00:05:35.205 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.205 23:52:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:35.205 ************************************ 00:05:35.205 END TEST accel_assign_opcode 00:05:35.205 ************************************ 00:05:35.205 23:52:04 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 418899 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 418899 ']' 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 418899 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 418899 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 418899' 00:05:35.205 killing process with pid 418899 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@965 -- # kill 418899 00:05:35.205 23:52:04 accel_rpc -- common/autotest_common.sh@970 -- # wait 418899 00:05:35.804 00:05:35.804 real 0m1.955s 00:05:35.804 user 0m2.115s 00:05:35.804 sys 0m0.471s 00:05:35.804 23:52:04 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.804 23:52:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.804 ************************************ 00:05:35.804 END TEST accel_rpc 00:05:35.804 ************************************ 00:05:35.804 23:52:04 -- spdk/autotest.sh@181 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:35.804 23:52:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.804 23:52:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.804 23:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:35.804 ************************************ 00:05:35.804 START TEST app_cmdline 00:05:35.804 ************************************ 00:05:35.804 23:52:04 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:35.804 * Looking for test storage... 00:05:35.804 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:35.805 23:52:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:35.805 23:52:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=419125 00:05:35.805 23:52:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:35.805 23:52:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 419125 00:05:35.805 23:52:04 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 419125 ']' 00:05:35.805 23:52:04 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.805 23:52:04 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.805 23:52:04 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.805 23:52:04 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.805 23:52:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:35.805 [2024-05-14 23:52:04.996724] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:05:35.805 [2024-05-14 23:52:04.996814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419125 ] 00:05:35.805 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.805 [2024-05-14 23:52:05.064670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.062 [2024-05-14 23:52:05.174890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.320 23:52:05 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.320 23:52:05 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:05:36.320 23:52:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:36.577 { 00:05:36.577 "version": "SPDK v24.05-pre git sha1 2260a96a9", 00:05:36.577 "fields": { 00:05:36.577 "major": 24, 00:05:36.577 "minor": 5, 00:05:36.577 "patch": 0, 00:05:36.577 "suffix": "-pre", 00:05:36.577 "commit": "2260a96a9" 00:05:36.577 } 00:05:36.577 } 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:36.577 23:52:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:36.577 23:52:05 app_cmdline -- 
common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:05:36.577 23:52:05 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.835 request: 00:05:36.835 { 00:05:36.835 "method": "env_dpdk_get_mem_stats", 00:05:36.835 "req_id": 1 00:05:36.835 } 00:05:36.835 Got JSON-RPC error response 00:05:36.835 response: 00:05:36.835 { 00:05:36.835 "code": -32601, 00:05:36.835 "message": "Method not found" 00:05:36.835 } 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.835 23:52:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 419125 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 419125 ']' 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 419125 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 419125 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 419125' 00:05:36.835 killing process with pid 419125 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@965 -- # kill 419125 00:05:36.835 23:52:06 app_cmdline -- common/autotest_common.sh@970 -- # wait 419125 00:05:37.402 00:05:37.402 real 0m1.627s 00:05:37.402 user 0m1.996s 00:05:37.402 sys 0m0.470s 00:05:37.402 23:52:06 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.402 23:52:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.402 ************************************ 00:05:37.402 END TEST app_cmdline 00:05:37.402 ************************************ 00:05:37.402 23:52:06 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:37.402 23:52:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.402 23:52:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.402 23:52:06 -- common/autotest_common.sh@10 -- # set +x 00:05:37.402 ************************************ 00:05:37.402 START TEST version 00:05:37.402 ************************************ 00:05:37.402 23:52:06 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:37.402 * Looking for test storage... 
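The app_cmdline test above boils down to RPC allowed-list behaviour: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods succeed and everything else returns the -32601 error just shown. A minimal sketch of the same exchange (paths per this workspace; the raw nc -U line is a hypothetical illustration, not something the test ran):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC spdk_get_version          # allowed: returns the version JSON above
$RPC rpc_get_methods           # allowed: returns exactly the two permitted methods
$RPC env_dpdk_get_mem_stats    # not on the list: fails with "Method not found" (-32601)
# hypothetical raw request straight to the UNIX domain socket:
echo '{"jsonrpc":"2.0","method":"env_dpdk_get_mem_stats","id":1}' | nc -U /var/tmp/spdk.sock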
00:05:37.402 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:37.402 23:52:06 version -- app/version.sh@17 -- # get_header_version major 00:05:37.402 23:52:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:37.402 23:52:06 version -- app/version.sh@14 -- # cut -f2 00:05:37.402 23:52:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.402 23:52:06 version -- app/version.sh@17 -- # major=24 00:05:37.402 23:52:06 version -- app/version.sh@18 -- # get_header_version minor 00:05:37.402 23:52:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:37.402 23:52:06 version -- app/version.sh@14 -- # cut -f2 00:05:37.402 23:52:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.402 23:52:06 version -- app/version.sh@18 -- # minor=5 00:05:37.402 23:52:06 version -- app/version.sh@19 -- # get_header_version patch 00:05:37.402 23:52:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:37.402 23:52:06 version -- app/version.sh@14 -- # cut -f2 00:05:37.402 23:52:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.402 23:52:06 version -- app/version.sh@19 -- # patch=0 00:05:37.402 23:52:06 version -- app/version.sh@20 -- # get_header_version suffix 00:05:37.402 23:52:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:37.402 23:52:06 version -- app/version.sh@14 -- # cut -f2 00:05:37.402 23:52:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.402 23:52:06 version -- app/version.sh@20 -- # suffix=-pre 00:05:37.402 23:52:06 version -- app/version.sh@22 -- # version=24.5 00:05:37.402 23:52:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:37.402 23:52:06 version -- app/version.sh@28 -- # version=24.5rc0 00:05:37.402 23:52:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:37.402 23:52:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:37.402 23:52:06 version -- app/version.sh@30 -- # py_version=24.5rc0 00:05:37.402 23:52:06 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:37.402 00:05:37.402 real 0m0.103s 00:05:37.402 user 0m0.061s 00:05:37.402 sys 0m0.064s 00:05:37.402 23:52:06 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.402 23:52:06 version -- common/autotest_common.sh@10 -- # set +x 00:05:37.402 ************************************ 00:05:37.402 END TEST version 00:05:37.402 ************************************ 00:05:37.402 23:52:06 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:37.402 23:52:06 -- spdk/autotest.sh@194 -- # uname -s 00:05:37.402 23:52:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:37.402 23:52:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:37.402 23:52:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:37.402 23:52:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:37.402 23:52:06 -- 
spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:37.402 23:52:06 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:37.402 23:52:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.402 23:52:06 -- common/autotest_common.sh@10 -- # set +x 00:05:37.402 23:52:06 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:37.402 23:52:06 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:37.402 23:52:06 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:05:37.402 23:52:06 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:05:37.402 23:52:06 -- spdk/autotest.sh@279 -- # '[' rdma = rdma ']' 00:05:37.403 23:52:06 -- spdk/autotest.sh@280 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:37.403 23:52:06 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:37.403 23:52:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.403 23:52:06 -- common/autotest_common.sh@10 -- # set +x 00:05:37.403 ************************************ 00:05:37.403 START TEST nvmf_rdma 00:05:37.403 ************************************ 00:05:37.403 23:52:06 nvmf_rdma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:37.661 * Looking for test storage... 00:05:37.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.661 23:52:06 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:37.662 23:52:06 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.662 23:52:06 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.662 23:52:06 nvmf_rdma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.662 23:52:06 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.662 23:52:06 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.662 23:52:06 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.662 23:52:06 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:05:37.662 23:52:06 nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:37.662 23:52:06 nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:37.662 23:52:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:37.662 23:52:06 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:05:37.662 23:52:06 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:37.662 23:52:06 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.662 23:52:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:37.662 
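Before the nvmf_example run starts below, nvmf/common.sh has just defined the fabric parameters traced above (ports 4420-4422, the 192.168.100.x prefix, a host NQN from nvme gen-hostnqn, and NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn). A minimal sketch, assuming nvme-cli and a live target, of how those pieces combine on an initiator:

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # the uuid portion, as common.sh derives it
nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"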
************************************ 00:05:37.662 START TEST nvmf_example 00:05:37.662 ************************************ 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:05:37.662 * Looking for test storage... 00:05:37.662 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:37.662 23:52:06 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:37.662 23:52:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:05:40.195 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:05:40.195 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:40.195 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:05:40.196 Found net devices under 0000:09:00.0: mlx_0_0 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:40.196 23:52:09 
nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:05:40.196 Found net devices under 0000:09:00.1: mlx_0_1 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:40.196 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:40.455 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:40.455 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:40.455 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.455 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:40.456 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:40.456 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:05:40.456 altname enp9s0f0np0 00:05:40.456 inet 192.168.100.8/24 scope global mlx_0_0 00:05:40.456 valid_lft forever preferred_lft forever 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:40.456 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:40.456 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:05:40.456 altname enp9s0f1np1 00:05:40.456 inet 192.168.100.9/24 scope global mlx_0_1 00:05:40.456 valid_lft forever preferred_lft forever 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:40.456 23:52:09 
nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:40.456 192.168.100.9' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:40.456 192.168.100.9' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:40.456 192.168.100.9' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 
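The interface walk above ends with both RDMA ports resolved to addresses. The helper it traces repeatedly is small enough to restate; a sketch of that exact pipeline (one `ip -o` line per address, field 4 is ADDR/PREFIX, cut drops the prefix):

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0    # -> 192.168.100.8 on this host
get_ip_address mlx_0_1    # -> 192.168.100.9 on this host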
00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=421473 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 421473 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 421473 ']' 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
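The "Waiting for process..." message above comes from waitforlisten in autotest_common.sh, which polls the RPC socket until the freshly launched target answers. A simplified sketch of that loop (retry count and interval are illustrative, not the exact values):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    local rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1                     # target died during startup
        "$rpc" -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1                                                       # timed out waiting for the socket
}
waitforlisten_sketch 421473    # the nvmfpid launched above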
00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.456 23:52:09 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:40.456 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.390 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.390 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:05:41.390 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:41.390 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.390 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.390 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:05:41.390 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.390 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.648 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.649 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:41.649 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.649 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.649 23:52:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.649 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:41.649 23:52:10 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
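For reference, the target bring-up just traced reduces to five RPC calls plus the perf invocation; a stand-alone recap using the exact values from this run (rpc.py path per this workspace, target assumed already listening on /var/tmp/spdk.sock):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512    # yields Malloc0: 64 MiB of 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# initiator-side load generator, exactly as invoked above:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'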
00:05:41.649 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.864 Initializing NVMe Controllers
00:05:53.864 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:05:53.864 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:53.864 Initialization complete. Launching workers.
00:05:53.864 ========================================================
00:05:53.864                                                                 Latency(us)
00:05:53.864 Device Information                                              :       IOPS      MiB/s    Average        min        max
00:05:53.864 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   19538.10      76.32    3275.64     826.55   14975.07
00:05:53.864 ========================================================
00:05:53.864 Total                                                           :   19538.10      76.32    3275.64     826.55   14975.07
00:05:53.864
00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:05:53.864 rmmod nvme_rdma 00:05:53.864 rmmod nvme_fabrics 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 421473 ']' 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 421473 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 421473 ']' 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 421473 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 421473 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 421473' 00:05:53.864 killing process with pid 421473 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@965 -- # kill 421473 00:05:53.864 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@970 -- # wait 421473 00:05:53.864 [2024-05-14 23:52:22.312013] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:53.864 nvmf threads initialize successfully 00:05:53.864 bdev subsystem init successfully 00:05:53.864 created a nvmf target service 00:05:53.865 create targets's poll groups done 00:05:53.865 all subsystems of target started
00:05:53.865 nvmf target is running 00:05:53.865 all subsystems of target stopped 00:05:53.865 destroy targets's poll groups done 00:05:53.865 destroyed the nvmf target service 00:05:53.865 bdev subsystem finish successfully 00:05:53.865 nvmf threads destroy successfully 00:05:53.865 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:53.865 23:52:22 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:05:53.865 23:52:22 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:53.865 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.865 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:53.865 00:05:53.865 real 0m15.747s 00:05:53.865 user 0m51.886s 00:05:53.865 sys 0m2.345s 00:05:53.865 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.865 23:52:22 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:53.865 ************************************ 00:05:53.865 END TEST nvmf_example 00:05:53.865 ************************************ 00:05:53.865 23:52:22 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:05:53.865 23:52:22 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:53.865 23:52:22 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.865 23:52:22 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:53.865 ************************************ 00:05:53.865 START TEST nvmf_filesystem 00:05:53.865 ************************************ 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:05:53.865 * Looking for test storage... 
00:05:53.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:53.865 23:52:22 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # 
CONFIG_UBSAN=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # 
_test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:53.865 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:53.866 #define SPDK_CONFIG_H 00:05:53.866 #define SPDK_CONFIG_APPS 1 00:05:53.866 #define SPDK_CONFIG_ARCH native 00:05:53.866 #undef SPDK_CONFIG_ASAN 00:05:53.866 #undef SPDK_CONFIG_AVAHI 00:05:53.866 #undef SPDK_CONFIG_CET 00:05:53.866 #define SPDK_CONFIG_COVERAGE 1 00:05:53.866 #define SPDK_CONFIG_CROSS_PREFIX 00:05:53.866 #undef SPDK_CONFIG_CRYPTO 00:05:53.866 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:53.866 #undef SPDK_CONFIG_CUSTOMOCF 00:05:53.866 #undef SPDK_CONFIG_DAOS 00:05:53.866 #define SPDK_CONFIG_DAOS_DIR 00:05:53.866 #define SPDK_CONFIG_DEBUG 1 00:05:53.866 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:53.866 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:05:53.866 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:53.866 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:53.866 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:53.866 #undef SPDK_CONFIG_DPDK_UADK 00:05:53.866 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:05:53.866 #define SPDK_CONFIG_EXAMPLES 1 00:05:53.866 #undef SPDK_CONFIG_FC 00:05:53.866 #define SPDK_CONFIG_FC_PATH 00:05:53.866 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:53.866 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:53.866 #undef SPDK_CONFIG_FUSE 00:05:53.866 #undef SPDK_CONFIG_FUZZER 00:05:53.866 #define SPDK_CONFIG_FUZZER_LIB 00:05:53.866 #undef SPDK_CONFIG_GOLANG 00:05:53.866 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:53.866 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:53.866 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:53.866 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:53.866 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:53.866 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:53.866 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:53.866 #define SPDK_CONFIG_IDXD 1 00:05:53.866 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:53.866 #undef SPDK_CONFIG_IPSEC_MB 00:05:53.866 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:53.866 #define SPDK_CONFIG_ISAL 1 00:05:53.866 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:53.866 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:53.866 #define SPDK_CONFIG_LIBDIR 00:05:53.866 #undef SPDK_CONFIG_LTO 00:05:53.866 #define SPDK_CONFIG_MAX_LCORES 00:05:53.866 #define SPDK_CONFIG_NVME_CUSE 1 00:05:53.866 #undef SPDK_CONFIG_OCF 00:05:53.866 #define SPDK_CONFIG_OCF_PATH 00:05:53.866 #define SPDK_CONFIG_OPENSSL_PATH 00:05:53.866 #undef 
SPDK_CONFIG_PGO_CAPTURE 00:05:53.866 #define SPDK_CONFIG_PGO_DIR 00:05:53.866 #undef SPDK_CONFIG_PGO_USE 00:05:53.866 #define SPDK_CONFIG_PREFIX /usr/local 00:05:53.866 #undef SPDK_CONFIG_RAID5F 00:05:53.866 #undef SPDK_CONFIG_RBD 00:05:53.866 #define SPDK_CONFIG_RDMA 1 00:05:53.866 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:53.866 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:53.866 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:53.866 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:53.866 #define SPDK_CONFIG_SHARED 1 00:05:53.866 #undef SPDK_CONFIG_SMA 00:05:53.866 #define SPDK_CONFIG_TESTS 1 00:05:53.866 #undef SPDK_CONFIG_TSAN 00:05:53.866 #define SPDK_CONFIG_UBLK 1 00:05:53.866 #define SPDK_CONFIG_UBSAN 1 00:05:53.866 #undef SPDK_CONFIG_UNIT_TESTS 00:05:53.866 #undef SPDK_CONFIG_URING 00:05:53.866 #define SPDK_CONFIG_URING_PATH 00:05:53.866 #undef SPDK_CONFIG_URING_ZNS 00:05:53.866 #undef SPDK_CONFIG_USDT 00:05:53.866 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:53.866 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:53.866 #undef SPDK_CONFIG_VFIO_USER 00:05:53.866 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:53.866 #define SPDK_CONFIG_VHOST 1 00:05:53.866 #define SPDK_CONFIG_VIRTIO 1 00:05:53.866 #undef SPDK_CONFIG_VTUNE 00:05:53.866 #define SPDK_CONFIG_VTUNE_DIR 00:05:53.866 #define SPDK_CONFIG_WERROR 1 00:05:53.866 #define SPDK_CONFIG_WPDK_DIR 00:05:53.866 #undef SPDK_CONFIG_XNVME 00:05:53.866 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:05:53.866 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # : rdma 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:05:53.867 23:52:22 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # : mlx5 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:53.867 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 423122 ]] 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 423122 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.Jy1hG7 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Jy1hG7/tests/target /tmp/spdk.Jy1hG7 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=968667136 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4315762688 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=48288501760 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=13706227712 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941728768 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12389978112 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8970240 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30995857408 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 
00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1507328 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:05:53.868 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:05:53.869 * Looking for test storage... 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=48288501760 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=15920820224 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:53.869 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 
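The set_test_storage walk that just finished reduces to a small accounting rule: parse df -T, find the mount that backs the test directory, require its available space to cover the requested size, and reject it if the test would push the filesystem past 95% full. A condensed sketch using the values from this run (the df parsing loop and the fallback candidates under /tmp/spdk.XXXXXX are elided):

    check_test_storage() {
        local requested_size=2214592512   # ~2 GiB of test data plus scratch, per the trace
        local avail=48288501760           # df 'avail' for the overlay root backing the test dir
        local fs_size=61994729472         # df 'size' for that mount
        local used=13706227712            # df 'used' for that mount
        (( avail >= requested_size )) || return 1         # not enough free space
        local new_size=$(( used + requested_size ))       # 15920820224 in this run
        (( new_size * 100 / fs_size > 95 )) && return 1   # would overfill the mount
        return 0                          # accept: "* Found test storage at ..."
    }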
00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:53.869 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:53.870 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:53.870 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:53.870 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:53.870 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:53.870 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:53.870 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:53.870 23:52:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:53.870 23:52:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:56.402 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:05:56.403 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:05:56.403 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:05:56.403 Found net devices under 0000:09:00.0: mlx_0_0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:05:56.403 Found net devices under 0000:09:00.1: mlx_0_1 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
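Each matched PCI function is resolved to its kernel netdev through sysfs, and rdma_device_init loads the RDMA module stack before addresses are collected. Condensed from the trace above, with the paths and module list exactly as logged:

    # Resolve a PCI function to its netdev name(s), e.g. mlx_0_0:
    pci=0000:09:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name

    # Kernel modules loaded for NVMe/RDMA on mlx5 hardware:
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done

allocate_nic_ips then reads each interface's IPv4 address with the pipeline traced next (ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1), yielding 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1.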
00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:56.403 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:56.403 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:05:56.403 altname enp9s0f0np0 00:05:56.403 inet 192.168.100.8/24 scope global mlx_0_0 00:05:56.403 valid_lft forever preferred_lft forever 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:56.403 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:56.403 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:05:56.403 altname enp9s0f1np1 00:05:56.403 inet 192.168.100.9/24 scope global mlx_0_1 00:05:56.403 valid_lft forever preferred_lft forever 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem 
-- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:56.403 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:56.404 192.168.100.9' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:56.404 192.168.100.9' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:56.404 192.168.100.9' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.404 ************************************ 00:05:56.404 START TEST nvmf_filesystem_no_in_capsule 00:05:56.404 ************************************ 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=425077 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 425077 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 425077 ']' 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.404 23:52:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:56.404 [2024-05-14 23:52:25.415971] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:05:56.404 [2024-05-14 23:52:25.416045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:56.404 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.404 [2024-05-14 23:52:25.489214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.404 [2024-05-14 23:52:25.612747] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:56.404 [2024-05-14 23:52:25.612805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:56.404 [2024-05-14 23:52:25.612821] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.404 [2024-05-14 23:52:25.612834] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.404 [2024-05-14 23:52:25.612845] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:56.404 [2024-05-14 23:52:25.612966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.404 [2024-05-14 23:52:25.613023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.404 [2024-05-14 23:52:25.613020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.404 [2024-05-14 23:52:25.612992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.337 [2024-05-14 23:52:26.415092] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule 
data size is set to 256, this is minimum size required to support msdbd=16 00:05:57.337 [2024-05-14 23:52:26.438590] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13caa20/0x13cef10) succeed. 00:05:57.337 [2024-05-14 23:52:26.449377] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13cc060/0x14105a0) succeed. 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.337 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.594 Malloc1 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.594 [2024-05-14 23:52:26.733840] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:57.594 [2024-05-14 23:52:26.734165] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:05:57.594 23:52:26 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.594 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:05:57.594 { 00:05:57.594 "name": "Malloc1", 00:05:57.594 "aliases": [ 00:05:57.594 "43d0e2d7-f160-46b5-830d-7f53435a7e1f" 00:05:57.594 ], 00:05:57.594 "product_name": "Malloc disk", 00:05:57.594 "block_size": 512, 00:05:57.594 "num_blocks": 1048576, 00:05:57.595 "uuid": "43d0e2d7-f160-46b5-830d-7f53435a7e1f", 00:05:57.595 "assigned_rate_limits": { 00:05:57.595 "rw_ios_per_sec": 0, 00:05:57.595 "rw_mbytes_per_sec": 0, 00:05:57.595 "r_mbytes_per_sec": 0, 00:05:57.595 "w_mbytes_per_sec": 0 00:05:57.595 }, 00:05:57.595 "claimed": true, 00:05:57.595 "claim_type": "exclusive_write", 00:05:57.595 "zoned": false, 00:05:57.595 "supported_io_types": { 00:05:57.595 "read": true, 00:05:57.595 "write": true, 00:05:57.595 "unmap": true, 00:05:57.595 "write_zeroes": true, 00:05:57.595 "flush": true, 00:05:57.595 "reset": true, 00:05:57.595 "compare": false, 00:05:57.595 "compare_and_write": false, 00:05:57.595 "abort": true, 00:05:57.595 "nvme_admin": false, 00:05:57.595 "nvme_io": false 00:05:57.595 }, 00:05:57.595 "memory_domains": [ 00:05:57.595 { 00:05:57.595 "dma_device_id": "system", 00:05:57.595 "dma_device_type": 1 00:05:57.595 }, 00:05:57.595 { 00:05:57.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.595 "dma_device_type": 2 00:05:57.595 } 00:05:57.595 ], 00:05:57.595 "driver_specific": {} 00:05:57.595 } 00:05:57.595 ]' 00:05:57.595 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:05:57.595 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:05:57.595 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:05:57.595 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:05:57.595 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:05:57.595 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:05:57.595 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:57.595 23:52:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:06:01.802 23:52:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:01.802 23:52:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:01.802 23:52:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:01.802 23:52:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:01.802 23:52:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:03.706 23:52:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:04.638 23:52:33 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.638 ************************************ 00:06:04.638 START TEST filesystem_ext4 00:06:04.638 ************************************ 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:04.638 23:52:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:04.638 mke2fs 1.46.5 (30-Dec-2021) 00:06:04.896 Discarding device blocks: 0/522240 done 00:06:04.896 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:04.896 Filesystem UUID: 0000d8e1-1207-4cb9-b390-a3d13745dfb7 00:06:04.896 Superblock backups stored on blocks: 00:06:04.896 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:04.896 00:06:04.896 Allocating group tables: 0/64 done 00:06:04.896 Writing inode tables: 0/64 done 00:06:04.896 Creating journal (8192 blocks): done 00:06:04.896 Writing superblocks and filesystem accounting information: 0/64 done 00:06:04.896 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:04.896 23:52:34 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 425077 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:04.896 00:06:04.896 real 0m0.162s 00:06:04.896 user 0m0.013s 00:06:04.896 sys 0m0.031s 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:04.896 ************************************ 00:06:04.896 END TEST filesystem_ext4 00:06:04.896 ************************************ 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.896 ************************************ 00:06:04.896 START TEST filesystem_btrfs 00:06:04.896 ************************************ 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:04.896 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:04.897 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:04.897 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:04.897 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:04.897 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 
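Every filesystem_* subtest funnels through the make_filesystem helper being traced here: it picks mkfs's force flag (-F for ext4, -f for btrfs and xfs) and formats the GPT partition created earlier with parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%. A condensed sketch; the retry counter i visible in the trace is elided:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        "mkfs.$fstype" "$force" "$dev_name"
    }
    make_filesystem btrfs /dev/nvme0n1p1   # the variant starting here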
00:06:04.897 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:04.897 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:04.897 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:05.154 btrfs-progs v6.6.2 00:06:05.154 See https://btrfs.readthedocs.io for more information. 00:06:05.154 00:06:05.154 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:05.154 NOTE: several default settings have changed in version 5.15, please make sure 00:06:05.155 this does not affect your deployments: 00:06:05.155 - DUP for metadata (-m dup) 00:06:05.155 - enabled no-holes (-O no-holes) 00:06:05.155 - enabled free-space-tree (-R free-space-tree) 00:06:05.155 00:06:05.155 Label: (null) 00:06:05.155 UUID: dbb4ca6b-4492-4f5b-8f49-8d1b13a4f14d 00:06:05.155 Node size: 16384 00:06:05.155 Sector size: 4096 00:06:05.155 Filesystem size: 510.00MiB 00:06:05.155 Block group profiles: 00:06:05.155 Data: single 8.00MiB 00:06:05.155 Metadata: DUP 32.00MiB 00:06:05.155 System: DUP 8.00MiB 00:06:05.155 SSD detected: yes 00:06:05.155 Zoned device: no 00:06:05.155 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:05.155 Runtime features: free-space-tree 00:06:05.155 Checksum: crc32c 00:06:05.155 Number of devices: 1 00:06:05.155 Devices: 00:06:05.155 ID SIZE PATH 00:06:05.155 1 510.00MiB /dev/nvme0n1p1 00:06:05.155 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 425077 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:05.155 00:06:05.155 real 0m0.165s 00:06:05.155 user 0m0.008s 
00:06:05.155 sys 0m0.041s 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:05.155 ************************************ 00:06:05.155 END TEST filesystem_btrfs 00:06:05.155 ************************************ 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.155 ************************************ 00:06:05.155 START TEST filesystem_xfs 00:06:05.155 ************************************ 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:05.155 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:05.155 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:05.155 = sectsz=512 attr=2, projid32bit=1 00:06:05.155 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:05.155 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:05.155 data = bsize=4096 blocks=130560, imaxpct=25 00:06:05.155 = sunit=0 swidth=0 blks 00:06:05.155 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:05.155 log =internal log bsize=4096 blocks=16384, version=2 00:06:05.155 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:05.155 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:05.413 Discarding blocks...Done. 
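After mkfs, each variant exercises the new filesystem across the fabric and then checks that the target survived: mount the remote namespace's partition, create and delete a file with syncs in between, unmount, and confirm both that the nvmf_tgt PID still answers kill -0 and that the NVMe block devices are still visible. The steps from target/filesystem.sh@23-43, as traced above:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target process still alive?
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still attached?
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present?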
00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 425077 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:05.413 00:06:05.413 real 0m0.173s 00:06:05.413 user 0m0.014s 00:06:05.413 sys 0m0.029s 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:05.413 ************************************ 00:06:05.413 END TEST filesystem_xfs 00:06:05.413 ************************************ 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:05.413 23:52:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:07.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o 
NAME,SERIAL 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 425077 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 425077 ']' 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 425077 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 425077 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 425077' 00:06:07.940 killing process with pid 425077 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 425077 00:06:07.940 [2024-05-14 23:52:37.071379] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:07.940 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 425077 00:06:07.940 [2024-05-14 23:52:37.128923] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:08.507 00:06:08.507 real 0m12.232s 00:06:08.507 user 0m47.694s 00:06:08.507 sys 0m0.954s 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.507 ************************************ 00:06:08.507 END TEST nvmf_filesystem_no_in_capsule 00:06:08.507 ************************************ 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem -- 
target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.507 ************************************ 00:06:08.507 START TEST nvmf_filesystem_in_capsule 00:06:08.507 ************************************ 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=426764 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 426764 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 426764 ']' 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.507 23:52:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.507 [2024-05-14 23:52:37.709131] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:06:08.507 [2024-05-14 23:52:37.709218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.507 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.507 [2024-05-14 23:52:37.784050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.766 [2024-05-14 23:52:37.902951] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:08.766 [2024-05-14 23:52:37.903009] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
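The in-capsule pass starts exactly like the first: nvmfappstart launches the target with the arguments logged above, and waitforlisten blocks until the RPC socket answers before any rpc_cmd runs. Roughly, with rpc_get_methods assumed as the liveness probe (an internal detail of waitforlisten, not something visible in this log):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the target listens on the UNIX domain socket:
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done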
00:06:08.766 [2024-05-14 23:52:37.903026] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.766 [2024-05-14 23:52:37.903039] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.766 [2024-05-14 23:52:37.903051] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:08.766 [2024-05-14 23:52:37.903108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.766 [2024-05-14 23:52:37.903160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.766 [2024-05-14 23:52:37.903193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.766 [2024-05-14 23:52:37.903196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.331 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.589 [2024-05-14 23:52:38.705746] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17aaa20/0x17aef10) succeed. 00:06:09.589 [2024-05-14 23:52:38.716520] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17ac060/0x17f05a0) succeed. 
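The only functional difference from the no_in_capsule pass is the -c argument to nvmf_create_transport: 0 there, 4096 here. That value caps how much write payload a host may embed directly in the NVMe/RDMA command capsule rather than having the target fetch it with a separate RDMA read; with -c 0 the target raised it to the 256-byte minimum required for msdbd=16, per the warning in the first pass. The rest of the target-side sequence is identical in both passes, all of it visible in this log:

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420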
00:06:09.589 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.589 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:09.589 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.589 23:52:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.848 Malloc1 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.848 [2024-05-14 23:52:39.051470] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:09.848 [2024-05-14 23:52:39.051808] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:09.848 23:52:39 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:09.848 { 00:06:09.848 "name": "Malloc1", 00:06:09.848 "aliases": [ 00:06:09.848 "25fe4db1-5177-4476-975f-5699e6d4991b" 00:06:09.848 ], 00:06:09.848 "product_name": "Malloc disk", 00:06:09.848 "block_size": 512, 00:06:09.848 "num_blocks": 1048576, 00:06:09.848 "uuid": "25fe4db1-5177-4476-975f-5699e6d4991b", 00:06:09.848 "assigned_rate_limits": { 00:06:09.848 "rw_ios_per_sec": 0, 00:06:09.848 "rw_mbytes_per_sec": 0, 00:06:09.848 "r_mbytes_per_sec": 0, 00:06:09.848 "w_mbytes_per_sec": 0 00:06:09.848 }, 00:06:09.848 "claimed": true, 00:06:09.848 "claim_type": "exclusive_write", 00:06:09.848 "zoned": false, 00:06:09.848 "supported_io_types": { 00:06:09.848 "read": true, 00:06:09.848 "write": true, 00:06:09.848 "unmap": true, 00:06:09.848 "write_zeroes": true, 00:06:09.848 "flush": true, 00:06:09.848 "reset": true, 00:06:09.848 "compare": false, 00:06:09.848 "compare_and_write": false, 00:06:09.848 "abort": true, 00:06:09.848 "nvme_admin": false, 00:06:09.848 "nvme_io": false 00:06:09.848 }, 00:06:09.848 "memory_domains": [ 00:06:09.848 { 00:06:09.848 "dma_device_id": "system", 00:06:09.848 "dma_device_type": 1 00:06:09.848 }, 00:06:09.848 { 00:06:09.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.848 "dma_device_type": 2 00:06:09.848 } 00:06:09.848 ], 00:06:09.848 "driver_specific": {} 00:06:09.848 } 00:06:09.848 ]' 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:09.848 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:09.849 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:09.849 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:09.849 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:09.849 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:09.849 23:52:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:06:14.028 23:52:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:14.028 23:52:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:14.028 23:52:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:14.028 23:52:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:14.028 23:52:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:15.925 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:15.925 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:15.926 23:52:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:15.926 23:52:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.860 ************************************ 00:06:16.860 START TEST filesystem_in_capsule_ext4 00:06:16.860 ************************************ 00:06:16.860 23:52:46 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:16.860 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:16.860 mke2fs 1.46.5 (30-Dec-2021) 00:06:17.119 Discarding device blocks: 0/522240 done 00:06:17.119 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:17.119 Filesystem UUID: 304f8ada-7c4e-4718-aa38-64433684006a 00:06:17.119 Superblock backups stored on blocks: 00:06:17.119 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:17.119 00:06:17.119 Allocating group tables: 0/64 done 00:06:17.119 Writing inode tables: 0/64 done 00:06:17.119 Creating journal (8192 blocks): done 00:06:17.119 Writing superblocks and filesystem accounting information: 0/64 done 00:06:17.119 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:17.119 23:52:46 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 426764 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:17.119 00:06:17.119 real 0m0.151s 00:06:17.119 user 0m0.012s 00:06:17.119 sys 0m0.032s 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:17.119 ************************************ 00:06:17.119 END TEST filesystem_in_capsule_ext4 00:06:17.119 ************************************ 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.119 ************************************ 00:06:17.119 START TEST filesystem_in_capsule_btrfs 00:06:17.119 ************************************ 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@930 -- # force=-f 00:06:17.119 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:17.378 btrfs-progs v6.6.2 00:06:17.378 See https://btrfs.readthedocs.io for more information. 00:06:17.378 00:06:17.378 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:17.378 NOTE: several default settings have changed in version 5.15, please make sure 00:06:17.378 this does not affect your deployments: 00:06:17.378 - DUP for metadata (-m dup) 00:06:17.378 - enabled no-holes (-O no-holes) 00:06:17.378 - enabled free-space-tree (-R free-space-tree) 00:06:17.378 00:06:17.378 Label: (null) 00:06:17.378 UUID: 4b214abd-b3b8-47b6-977e-146b6c2d59cd 00:06:17.378 Node size: 16384 00:06:17.378 Sector size: 4096 00:06:17.378 Filesystem size: 510.00MiB 00:06:17.378 Block group profiles: 00:06:17.378 Data: single 8.00MiB 00:06:17.378 Metadata: DUP 32.00MiB 00:06:17.378 System: DUP 8.00MiB 00:06:17.378 SSD detected: yes 00:06:17.378 Zoned device: no 00:06:17.378 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:17.378 Runtime features: free-space-tree 00:06:17.378 Checksum: crc32c 00:06:17.378 Number of devices: 1 00:06:17.378 Devices: 00:06:17.378 ID SIZE PATH 00:06:17.378 1 510.00MiB /dev/nvme0n1p1 00:06:17.378 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 426764 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:17.378 00:06:17.378 real 0m0.162s 00:06:17.378 user 0m0.011s 00:06:17.378 sys 0m0.038s 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:17.378 ************************************ 00:06:17.378 END TEST filesystem_in_capsule_btrfs 00:06:17.378 ************************************ 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.378 ************************************ 00:06:17.378 START TEST filesystem_in_capsule_xfs 00:06:17.378 ************************************ 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:17.378 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:17.378 = sectsz=512 attr=2, projid32bit=1 00:06:17.378 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:17.378 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:17.378 data = bsize=4096 blocks=130560, imaxpct=25 00:06:17.378 = sunit=0 swidth=0 blks 00:06:17.378 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:17.378 log =internal log bsize=4096 blocks=16384, version=2 00:06:17.378 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:17.378 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:17.378 Discarding blocks...Done. 
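The three subtests in this block (ext4 above, then btrfs and xfs) each run the same make_filesystem cycle against /dev/nvme0n1p1: format, mount, write a file, sync, remove it, unmount. A condensed sketch follows; the loop form is an abstraction over the three recorded runs, while the individual commands are the ones the trace shows (filesystem.sh@21-30):

    dev=/dev/nvme0n1p1
    mkdir -p /mnt/device
    for fs in ext4 btrfs xfs; do
        force=-f; [ "$fs" = ext4 ] && force=-F    # ext4 takes -F, btrfs/xfs take -f
        mkfs.$fs $force "$dev"                    # make_filesystem
        mount "$dev" /mnt/device
        touch /mnt/device/aaa                     # prove the mount is writable
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
    done

The kill -0 426764 check after each run confirms the target process survived the I/O before the lsblk greps verify that nvme0n1 and nvme0n1p1 are still visible.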
00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:17.378 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 426764 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:17.636 00:06:17.636 real 0m0.179s 00:06:17.636 user 0m0.016s 00:06:17.636 sys 0m0.025s 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:17.636 ************************************ 00:06:17.636 END TEST filesystem_in_capsule_xfs 00:06:17.636 ************************************ 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:17.636 23:52:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:20.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:20.163 23:52:49 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 426764 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 426764 ']' 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 426764 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 426764 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 426764' 00:06:20.163 killing process with pid 426764 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 426764 00:06:20.163 [2024-05-14 23:52:49.090925] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:20.163 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 426764 00:06:20.163 [2024-05-14 23:52:49.180410] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:20.422 00:06:20.422 real 0m11.980s 00:06:20.422 user 0m46.595s 00:06:20.422 sys 0m0.956s 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.422 ************************************ 00:06:20.422 END TEST nvmf_filesystem_in_capsule 00:06:20.422 ************************************ 00:06:20.422 
23:52:49 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:20.422 rmmod nvme_rdma 00:06:20.422 rmmod nvme_fabrics 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:06:20.422 00:06:20.422 real 0m27.066s 00:06:20.422 user 1m35.333s 00:06:20.422 sys 0m3.822s 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.422 23:52:49 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.422 ************************************ 00:06:20.422 END TEST nvmf_filesystem 00:06:20.422 ************************************ 00:06:20.422 23:52:49 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:06:20.422 23:52:49 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:20.422 23:52:49 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.422 23:52:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:20.422 ************************************ 00:06:20.422 START TEST nvmf_target_discovery 00:06:20.422 ************************************ 00:06:20.422 23:52:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:06:20.681 * Looking for test storage... 
00:06:20.681 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.681 23:52:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:20.682 23:52:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.682 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:20.682 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:20.682 23:52:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:20.682 23:52:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:06:23.262 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:06:23.262 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:06:23.262 Found net devices under 0000:09:00.0: mlx_0_0 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.262 23:52:52 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:06:23.262 Found net devices under 0000:09:00.1: mlx_0_1 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:06:23.262 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.263 23:52:52 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:23.263 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:23.263 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:06:23.263 altname enp9s0f0np0 00:06:23.263 inet 192.168.100.8/24 scope global mlx_0_0 00:06:23.263 valid_lft forever preferred_lft forever 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:23.263 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:23.263 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:06:23.263 altname enp9s0f1np1 00:06:23.263 inet 192.168.100.9/24 scope global mlx_0_1 00:06:23.263 valid_lft forever preferred_lft forever 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:23.263 
23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:23.263 23:52:52 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:23.263 192.168.100.9' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:23.263 192.168.100.9' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:23.263 192.168.100.9' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=430397 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 430397 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 430397 ']' 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.263 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.263 [2024-05-14 23:52:52.478970] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
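The interface walk traced above boils down to two steps: get_ip_address (common.sh@112-113) extracts the first IPv4 address of an interface, and the common.sh@456-458 sequence splits the resulting list into the two target variables. A minimal sketch reassembled from the traced commands (get_available_rdma_ips is the harness helper whose output is assumed to be one address per line, as this run shows):

    # common.sh@112-113: first IPv4 address of an interface, prefix length stripped.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    # common.sh@456-458: RDMA_IP_LIST is newline-separated
    # ("192.168.100.8\n192.168.100.9" in this run); head/tail peel off the
    # first and second entries.
    RDMA_IP_LIST=$(get_available_rdma_ips)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)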
00:06:23.263 [2024-05-14 23:52:52.479051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.263 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.263 [2024-05-14 23:52:52.557206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.522 [2024-05-14 23:52:52.678293] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.522 [2024-05-14 23:52:52.678357] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.522 [2024-05-14 23:52:52.678373] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.522 [2024-05-14 23:52:52.678386] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.522 [2024-05-14 23:52:52.678397] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:23.522 [2024-05-14 23:52:52.678477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.522 [2024-05-14 23:52:52.678532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.522 [2024-05-14 23:52:52.678565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.522 [2024-05-14 23:52:52.678567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.522 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.522 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:06:23.522 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:23.522 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.522 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.522 23:52:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.522 23:52:52 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:23.522 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.522 23:52:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.522 [2024-05-14 23:52:52.861325] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aa7a20/0x1aabf10) succeed. 00:06:23.780 [2024-05-14 23:52:52.872378] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aa9060/0x1aed5a0) succeed. 
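rpc_cmd in this trace is the autotest wrapper around SPDK's scripts/rpc.py pointed at /var/tmp/spdk.sock, so the target start and transport creation above correspond roughly to the following standalone commands (flags copied from the trace; the repository-relative paths are assumptions):

    # nvmf/common.sh@480: instance 0, all tracepoint groups, core mask 0xF.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # target/discovery.sh@23: RDMA transport with 1024 shared buffers and an
    # 8192-byte I/O unit size (-u).
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192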
00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 Null1 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 [2024-05-14 23:52:53.050245] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:23.780 [2024-05-14 23:52:53.050555] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 Null2 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 Null3 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:23.780 Null4 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
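The target/discovery.sh@26-30 loop, three iterations of which are traced above with the fourth completing just below, creates one backing device and one exported subsystem per index. The unrolled RPCs reassembled into loop form (serial-number pattern inferred from SPDK00000000000001..04 in the trace):

    # Four 100 GiB null bdevs (102400 MiB, 512-byte blocks), each exposed via
    # its own subsystem with an RDMA listener on the first target IP.
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"  # -a: allow any host
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    done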
00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.780 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.039 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 4420 00:06:24.039 00:06:24.039 Discovery Log Number of Records 6, Generation counter 6 00:06:24.039 =====Discovery Log Entry 0====== 00:06:24.039 trtype: rdma 00:06:24.039 adrfam: ipv4 00:06:24.039 subtype: current discovery subsystem 00:06:24.039 treq: not required 00:06:24.039 portid: 0 00:06:24.039 trsvcid: 4420 00:06:24.039 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:24.039 traddr: 192.168.100.8 00:06:24.039 eflags: explicit discovery connections, duplicate discovery information 00:06:24.039 rdma_prtype: not specified 00:06:24.039 rdma_qptype: connected 00:06:24.039 rdma_cms: rdma-cm 00:06:24.039 rdma_pkey: 0x0000 00:06:24.039 =====Discovery Log Entry 1====== 00:06:24.039 trtype: rdma 00:06:24.039 adrfam: ipv4 00:06:24.039 subtype: nvme subsystem 00:06:24.039 treq: not required 00:06:24.039 portid: 0 00:06:24.039 trsvcid: 4420 00:06:24.039 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:24.039 traddr: 192.168.100.8 
00:06:24.039 eflags: none 00:06:24.039 rdma_prtype: not specified 00:06:24.039 rdma_qptype: connected 00:06:24.039 rdma_cms: rdma-cm 00:06:24.039 rdma_pkey: 0x0000 00:06:24.039 =====Discovery Log Entry 2====== 00:06:24.039 trtype: rdma 00:06:24.039 adrfam: ipv4 00:06:24.039 subtype: nvme subsystem 00:06:24.039 treq: not required 00:06:24.039 portid: 0 00:06:24.039 trsvcid: 4420 00:06:24.039 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:24.039 traddr: 192.168.100.8 00:06:24.039 eflags: none 00:06:24.039 rdma_prtype: not specified 00:06:24.039 rdma_qptype: connected 00:06:24.039 rdma_cms: rdma-cm 00:06:24.039 rdma_pkey: 0x0000 00:06:24.039 =====Discovery Log Entry 3====== 00:06:24.039 trtype: rdma 00:06:24.039 adrfam: ipv4 00:06:24.039 subtype: nvme subsystem 00:06:24.039 treq: not required 00:06:24.039 portid: 0 00:06:24.039 trsvcid: 4420 00:06:24.039 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:24.039 traddr: 192.168.100.8 00:06:24.039 eflags: none 00:06:24.039 rdma_prtype: not specified 00:06:24.039 rdma_qptype: connected 00:06:24.039 rdma_cms: rdma-cm 00:06:24.039 rdma_pkey: 0x0000 00:06:24.039 =====Discovery Log Entry 4====== 00:06:24.039 trtype: rdma 00:06:24.039 adrfam: ipv4 00:06:24.039 subtype: nvme subsystem 00:06:24.039 treq: not required 00:06:24.039 portid: 0 00:06:24.039 trsvcid: 4420 00:06:24.039 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:24.039 traddr: 192.168.100.8 00:06:24.039 eflags: none 00:06:24.039 rdma_prtype: not specified 00:06:24.039 rdma_qptype: connected 00:06:24.039 rdma_cms: rdma-cm 00:06:24.039 rdma_pkey: 0x0000 00:06:24.039 =====Discovery Log Entry 5====== 00:06:24.039 trtype: rdma 00:06:24.039 adrfam: ipv4 00:06:24.039 subtype: discovery subsystem referral 00:06:24.039 treq: not required 00:06:24.039 portid: 0 00:06:24.039 trsvcid: 4430 00:06:24.039 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:24.039 traddr: 192.168.100.8 00:06:24.039 eflags: none 00:06:24.039 rdma_prtype: unrecognized 00:06:24.039 rdma_qptype: unrecognized 00:06:24.039 rdma_cms: unrecognized 00:06:24.039 rdma_pkey: 0x0000 00:06:24.039 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:24.039 Perform nvmf subsystem discovery via RPC 00:06:24.039 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:24.039 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.039 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.039 [ 00:06:24.039 { 00:06:24.039 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:24.039 "subtype": "Discovery", 00:06:24.039 "listen_addresses": [ 00:06:24.039 { 00:06:24.039 "trtype": "RDMA", 00:06:24.039 "adrfam": "IPv4", 00:06:24.039 "traddr": "192.168.100.8", 00:06:24.039 "trsvcid": "4420" 00:06:24.039 } 00:06:24.039 ], 00:06:24.039 "allow_any_host": true, 00:06:24.039 "hosts": [] 00:06:24.039 }, 00:06:24.039 { 00:06:24.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:24.039 "subtype": "NVMe", 00:06:24.039 "listen_addresses": [ 00:06:24.039 { 00:06:24.039 "trtype": "RDMA", 00:06:24.039 "adrfam": "IPv4", 00:06:24.039 "traddr": "192.168.100.8", 00:06:24.039 "trsvcid": "4420" 00:06:24.039 } 00:06:24.039 ], 00:06:24.039 "allow_any_host": true, 00:06:24.039 "hosts": [], 00:06:24.039 "serial_number": "SPDK00000000000001", 00:06:24.039 "model_number": "SPDK bdev Controller", 00:06:24.039 "max_namespaces": 32, 00:06:24.039 "min_cntlid": 1, 00:06:24.039 "max_cntlid": 65519, 
00:06:24.039 "namespaces": [ 00:06:24.039 { 00:06:24.039 "nsid": 1, 00:06:24.039 "bdev_name": "Null1", 00:06:24.039 "name": "Null1", 00:06:24.039 "nguid": "5D2D33FFD5B9431BB5D398F28848D903", 00:06:24.039 "uuid": "5d2d33ff-d5b9-431b-b5d3-98f28848d903" 00:06:24.039 } 00:06:24.039 ] 00:06:24.039 }, 00:06:24.039 { 00:06:24.039 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:24.039 "subtype": "NVMe", 00:06:24.039 "listen_addresses": [ 00:06:24.039 { 00:06:24.039 "trtype": "RDMA", 00:06:24.039 "adrfam": "IPv4", 00:06:24.039 "traddr": "192.168.100.8", 00:06:24.039 "trsvcid": "4420" 00:06:24.039 } 00:06:24.039 ], 00:06:24.039 "allow_any_host": true, 00:06:24.039 "hosts": [], 00:06:24.039 "serial_number": "SPDK00000000000002", 00:06:24.039 "model_number": "SPDK bdev Controller", 00:06:24.039 "max_namespaces": 32, 00:06:24.039 "min_cntlid": 1, 00:06:24.039 "max_cntlid": 65519, 00:06:24.039 "namespaces": [ 00:06:24.039 { 00:06:24.039 "nsid": 1, 00:06:24.039 "bdev_name": "Null2", 00:06:24.039 "name": "Null2", 00:06:24.039 "nguid": "E49EF903F38748B1BCDF7D0AF968C857", 00:06:24.039 "uuid": "e49ef903-f387-48b1-bcdf-7d0af968c857" 00:06:24.039 } 00:06:24.039 ] 00:06:24.040 }, 00:06:24.040 { 00:06:24.040 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:24.040 "subtype": "NVMe", 00:06:24.040 "listen_addresses": [ 00:06:24.040 { 00:06:24.040 "trtype": "RDMA", 00:06:24.040 "adrfam": "IPv4", 00:06:24.040 "traddr": "192.168.100.8", 00:06:24.040 "trsvcid": "4420" 00:06:24.040 } 00:06:24.040 ], 00:06:24.040 "allow_any_host": true, 00:06:24.040 "hosts": [], 00:06:24.040 "serial_number": "SPDK00000000000003", 00:06:24.040 "model_number": "SPDK bdev Controller", 00:06:24.040 "max_namespaces": 32, 00:06:24.040 "min_cntlid": 1, 00:06:24.040 "max_cntlid": 65519, 00:06:24.040 "namespaces": [ 00:06:24.040 { 00:06:24.040 "nsid": 1, 00:06:24.040 "bdev_name": "Null3", 00:06:24.040 "name": "Null3", 00:06:24.040 "nguid": "77884DAFAE4D47CC92CC575BA81FE088", 00:06:24.040 "uuid": "77884daf-ae4d-47cc-92cc-575ba81fe088" 00:06:24.040 } 00:06:24.040 ] 00:06:24.040 }, 00:06:24.040 { 00:06:24.040 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:24.040 "subtype": "NVMe", 00:06:24.040 "listen_addresses": [ 00:06:24.040 { 00:06:24.040 "trtype": "RDMA", 00:06:24.040 "adrfam": "IPv4", 00:06:24.040 "traddr": "192.168.100.8", 00:06:24.040 "trsvcid": "4420" 00:06:24.040 } 00:06:24.040 ], 00:06:24.040 "allow_any_host": true, 00:06:24.040 "hosts": [], 00:06:24.040 "serial_number": "SPDK00000000000004", 00:06:24.040 "model_number": "SPDK bdev Controller", 00:06:24.040 "max_namespaces": 32, 00:06:24.040 "min_cntlid": 1, 00:06:24.040 "max_cntlid": 65519, 00:06:24.040 "namespaces": [ 00:06:24.040 { 00:06:24.040 "nsid": 1, 00:06:24.040 "bdev_name": "Null4", 00:06:24.040 "name": "Null4", 00:06:24.040 "nguid": "D25755800BF9449B8FE9F60B5F31668C", 00:06:24.040 "uuid": "d2575580-0bf9-449b-8fe9-f60b5f31668c" 00:06:24.040 } 00:06:24.040 ] 00:06:24.040 } 00:06:24.040 ] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 
nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:24.040 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:24.041 rmmod nvme_rdma 00:06:24.298 rmmod nvme_fabrics 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 430397 ']' 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 430397 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 430397 ']' 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 430397 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 430397 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 430397' 00:06:24.298 killing process with pid 430397 
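Teardown, which the trace above walks through and the kill just below completes, is symmetric with setup. Gathered into one place (rpc_cmd, nvmftestfini, and killprocess are the harness helpers seen in the trace):

    # discovery.sh@42-44: drop each subsystem, then its backing null bdev.
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc_cmd bdev_null_delete "Null$i"
    done
    # discovery.sh@47 and nvmftestfini: remove the referral, unload the kernel
    # initiator modules, and kill the target by pid (430397 in this run).
    rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"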
00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 430397 00:06:24.298 [2024-05-14 23:52:53.448593] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:24.298 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 430397 00:06:24.298 [2024-05-14 23:52:53.537598] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:06:24.558 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:24.558 23:52:53 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:06:24.558 00:06:24.558 real 0m4.055s 00:06:24.558 user 0m5.242s 00:06:24.558 sys 0m2.177s 00:06:24.558 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.558 23:52:53 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.558 ************************************ 00:06:24.558 END TEST nvmf_target_discovery 00:06:24.558 ************************************ 00:06:24.558 23:52:53 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:06:24.558 23:52:53 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:24.558 23:52:53 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.558 23:52:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:24.558 ************************************ 00:06:24.558 START TEST nvmf_referrals 00:06:24.558 ************************************ 00:06:24.558 23:52:53 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:06:24.817 * Looking for test storage... 
00:06:24.817 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:24.817 23:52:53 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:24.817 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:24.818 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:24.818 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.818 23:52:53 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:24.818 23:52:53 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.818 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:24.818 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:24.818 23:52:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:24.818 23:52:53 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:06:27.356 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:06:27.356 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:06:27.356 Found net devices under 0000:09:00.0: mlx_0_0 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.356 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:27.357 23:52:56 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:06:27.357 Found net devices under 0000:09:00.1: mlx_0_1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 
]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:27.357 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:27.357 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:06:27.357 altname enp9s0f0np0 00:06:27.357 inet 192.168.100.8/24 scope global mlx_0_0 00:06:27.357 valid_lft forever preferred_lft forever 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:27.357 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:27.357 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:06:27.357 altname enp9s0f1np1 00:06:27.357 inet 192.168.100.9/24 scope global mlx_0_1 00:06:27.357 valid_lft forever preferred_lft forever 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:27.357 
23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:27.357 192.168.100.9' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:27.357 192.168.100.9' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:27.357 192.168.100.9' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:27.357 
23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=432642 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 432642 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 432642 ']' 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.357 23:52:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.358 23:52:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.358 [2024-05-14 23:52:56.628084] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:06:27.358 [2024-05-14 23:52:56.628167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.358 [2024-05-14 23:52:56.701742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.616 [2024-05-14 23:52:56.822316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.616 [2024-05-14 23:52:56.822384] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.616 [2024-05-14 23:52:56.822400] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.616 [2024-05-14 23:52:56.822414] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.616 [2024-05-14 23:52:56.822425] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
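The nvmfappstart/waitforlisten pair traced above reduces to launching nvmf_tgt in the background and polling its UNIX-domain RPC socket until the app answers; pid 432642 is what the poll waits on. A minimal sketch of that pattern (the 100 x 0.1 s retry budget here is an assumption; the real helper carries its own max_retries):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all tracepoint groups, cores 0-3
nvmfpid=$!
# poll the default RPC socket until the target responds, roughly what waitforlisten does
for ((i = 0; i < 100; i++)); do
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done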
00:06:27.616 [2024-05-14 23:52:56.822516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.616 [2024-05-14 23:52:56.822574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.616 [2024-05-14 23:52:56.822608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.616 [2024-05-14 23:52:56.822610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.551 [2024-05-14 23:52:57.656877] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19fda20/0x1a01f10) succeed. 00:06:28.551 [2024-05-14 23:52:57.667517] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19ff060/0x1a435a0) succeed. 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.551 [2024-05-14 23:52:57.809277] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:28.551 [2024-05-14 23:52:57.809637] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd 
nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:28.551 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.809 23:52:57 
nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.809 23:52:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:28.809 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals 
-- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:28.810 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:29.068 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:29.327 23:52:58 
nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:29.327 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 
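Each referral check above pairs the target's RPC view with the host's view of the discovery log: nvmf_discovery_get_referrals piped through jq on one side, nvme discover -o json filtered for records other than the current discovery subsystem on the other, and the two must agree after every add or remove. A hedged standalone version of one round trip ($SPDK as before; the --hostnqn/--hostid flags from the trace are omitted for brevity):

# target side: add one referral, then check what the target reports
$SPDK/scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430
$SPDK/scripts/rpc.py nvmf_discovery_get_referrals | jq length                    # expect 1
$SPDK/scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # expect 127.0.0.2
# host side: referrals are the records whose subtype is not the current discovery subsystem
nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
# removal mirrors the add; get_referrals should then report length 0
$SPDK/scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430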
00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:29.585 rmmod nvme_rdma 00:06:29.585 rmmod nvme_fabrics 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 432642 ']' 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 432642 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 432642 ']' 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 432642 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 432642 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 432642' 00:06:29.585 killing process with pid 432642 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 432642 00:06:29.585 [2024-05-14 23:52:58.778035] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:29.585 23:52:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 432642 00:06:29.585 [2024-05-14 23:52:58.863955] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:06:29.844 23:52:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:29.844 23:52:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:06:29.844 00:06:29.844 real 0m5.254s 00:06:29.844 user 0m10.746s 00:06:29.844 sys 0m2.343s 00:06:29.844 23:52:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.844 23:52:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:29.844 ************************************ 00:06:29.844 END TEST nvmf_referrals 00:06:29.844 ************************************ 00:06:29.844 23:52:59 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:06:29.844 23:52:59 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:29.844 23:52:59 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.844 23:52:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:29.844 ************************************ 00:06:29.844 START TEST nvmf_connect_disconnect 00:06:29.844 ************************************ 00:06:29.844 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:06:30.104 * Looking for test storage... 
00:06:30.104 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 
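The host identity that common.sh sets up above comes from a single nvme gen-hostnqn call: the generated NQN embeds a UUID, and the trace shows that same UUID reused as the host ID, so discover and connect always present a consistent identity. One way to derive it (the parameter expansion is an assumption; only the resulting values appear in the trace):

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the bare uuid after the last colon
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 8009 -o json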
00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:30.104 23:52:59 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.636 23:53:01 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:06:32.636 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:06:32.636 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:06:32.636 Found net devices under 0000:09:00.0: mlx_0_0 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.636 23:53:01 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:06:32.636 Found net devices under 0000:09:00.1: mlx_0_1 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:32.636 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:32.636 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:06:32.636 altname enp9s0f0np0 00:06:32.636 inet 192.168.100.8/24 scope global mlx_0_0 00:06:32.636 valid_lft forever preferred_lft forever 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:32.636 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:32.637 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:32.637 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:06:32.637 altname enp9s0f1np1 00:06:32.637 inet 192.168.100.9/24 scope global mlx_0_1 00:06:32.637 valid_lft forever preferred_lft forever 
00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 
-- # ip -o -4 addr show mlx_0_1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:32.637 192.168.100.9' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:32.637 192.168.100.9' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:32.637 192.168.100.9' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=434965 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 434965 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 434965 ']' 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.637 23:53:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.896 [2024-05-14 23:53:01.982599] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
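Splitting RDMA_IP_LIST into first and second target addresses, traced at common.sh@457-458 above, is plain line selection with head and tail:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9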
00:06:32.896 [2024-05-14 23:53:01.982689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.896 [2024-05-14 23:53:02.054115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.896 [2024-05-14 23:53:02.165244] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.896 [2024-05-14 23:53:02.165317] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.896 [2024-05-14 23:53:02.165351] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.896 [2024-05-14 23:53:02.165363] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.896 [2024-05-14 23:53:02.165373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.896 [2024-05-14 23:53:02.165469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.896 [2024-05-14 23:53:02.165541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.896 [2024-05-14 23:53:02.165568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.896 [2024-05-14 23:53:02.165570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:33.153 [2024-05-14 23:53:02.327762] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:33.153 [2024-05-14 23:53:02.350849] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1248a20/0x124cf10) succeed. 00:06:33.153 [2024-05-14 23:53:02.361404] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x124a060/0x128e5a0) succeed. 
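rpc_cmd above forwards to SPDK's rpc.py, which speaks JSON-RPC over /var/tmp/spdk.sock; asking for -c 0 (no in-capsule data) is what triggers the warning that the target raised it to the 256-byte minimum needed for msdbd=16. The same transport call, standalone:

# rdma transport, 1024 shared buffers, 8 KiB I/O unit size, requested in-capsule data size 0
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0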
00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.153 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:33.410 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.410 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:33.410 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.410 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:33.410 [2024-05-14 23:53:02.516044] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:33.410 [2024-05-14 23:53:02.516374] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:33.410 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.410 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:33.410 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:33.410 23:53:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:41.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:49.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:12.406 23:53:41 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:12.406 rmmod nvme_rdma 00:07:12.406 rmmod nvme_fabrics 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 434965 ']' 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 434965 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 434965 ']' 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 434965 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 434965 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 434965' 00:07:12.406 killing process with pid 434965 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 434965 00:07:12.406 [2024-05-14 23:53:41.209354] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 434965 00:07:12.406 [2024-05-14 23:53:41.264874] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:12.406 00:07:12.406 real 0m42.369s 00:07:12.406 user 2m37.550s 00:07:12.406 sys 0m3.087s 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.406 23:53:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.406 ************************************ 00:07:12.406 END TEST nvmf_connect_disconnect 00:07:12.406 ************************************ 00:07:12.407 23:53:41 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:07:12.407 23:53:41 nvmf_rdma -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:12.407 23:53:41 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.407 23:53:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:12.407 ************************************ 00:07:12.407 START TEST nvmf_multitarget 00:07:12.407 ************************************ 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:07:12.407 * Looking for test storage... 00:07:12.407 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:12.407 
23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:12.407 23:53:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.941 
23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:07:14.941 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:07:14.941 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:07:14.941 Found net devices under 0000:09:00.0: mlx_0_0 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.941 23:53:44 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:07:14.941 Found net devices under 0000:09:00.1: mlx_0_1 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:14.941 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:07:15.200 23:53:44 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:15.200 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:15.200 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:07:15.200 altname enp9s0f0np0 00:07:15.200 inet 192.168.100.8/24 scope global mlx_0_0 00:07:15.200 valid_lft forever preferred_lft forever 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:15.200 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:15.201 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:15.201 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:07:15.201 altname enp9s0f1np1 00:07:15.201 inet 192.168.100.9/24 scope global mlx_0_1 00:07:15.201 valid_lft forever preferred_lft forever 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:15.201 23:53:44 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:15.201 192.168.100.9' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:15.201 192.168.100.9' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 
-- # echo '192.168.100.8 00:07:15.201 192.168.100.9' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=441798 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 441798 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 441798 ']' 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.201 23:53:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:15.201 [2024-05-14 23:53:44.410549] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:07:15.201 [2024-05-14 23:53:44.410627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.201 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.201 [2024-05-14 23:53:44.486053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.459 [2024-05-14 23:53:44.606895] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.459 [2024-05-14 23:53:44.606972] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.459 [2024-05-14 23:53:44.606998] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.459 [2024-05-14 23:53:44.607012] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
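waitforlisten, traced in fragments here (rpc_addr=/var/tmp/spdk.sock, max_retries=100), just polls the new target's RPC socket until it answers; a simplified sketch, where probing with rpc_get_methods is an assumption about the real helper's implementation:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    [ -n "$pid" ] || return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # If the target died during startup there is nothing to wait for.
        kill -0 "$pid" 2> /dev/null || return 1
        # A no-op RPC succeeding means the app is up and serving the socket.
        "$rpc" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}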
00:07:15.459 [2024-05-14 23:53:44.607024] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.459 [2024-05-14 23:53:44.607107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.459 [2024-05-14 23:53:44.607161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.459 [2024-05-14 23:53:44.607194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.459 [2024-05-14 23:53:44.607197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.025 23:53:45 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.025 23:53:45 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:07:16.025 23:53:45 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.025 23:53:45 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.025 23:53:45 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 23:53:45 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.025 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:16.282 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:16.282 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:16.282 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:16.282 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:16.282 "nvmf_tgt_1" 00:07:16.282 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:16.540 "nvmf_tgt_2" 00:07:16.540 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:16.540 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:16.540 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:16.540 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:16.797 true 00:07:16.797 23:53:45 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:16.797 true 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- 
target/multitarget.sh@41 -- # nvmftestfini 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:16.797 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:16.798 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:16.798 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:16.798 rmmod nvme_rdma 00:07:17.057 rmmod nvme_fabrics 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 441798 ']' 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 441798 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 441798 ']' 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 441798 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 441798 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 441798' 00:07:17.057 killing process with pid 441798 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 441798 00:07:17.057 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 441798 00:07:17.315 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:17.315 23:53:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:17.315 00:07:17.315 real 0m4.875s 00:07:17.315 user 0m9.254s 00:07:17.315 sys 0m2.295s 00:07:17.315 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.315 23:53:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:17.315 ************************************ 00:07:17.315 END TEST nvmf_multitarget 00:07:17.315 ************************************ 00:07:17.315 23:53:46 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:07:17.315 23:53:46 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:17.315 23:53:46 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.315 23:53:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:17.315 ************************************ 00:07:17.315 START TEST nvmf_rpc 00:07:17.315 ************************************ 00:07:17.315 23:53:46 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:07:17.315 * Looking for test storage... 00:07:17.315 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:17.315 23:53:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.315 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:17.315 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.315 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.316 23:53:46 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.316 23:53:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
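The pci_devs bookkeeping above is just bucketing PCI functions by vendor:device into NIC families; a condensed sketch of the same classification, where the device IDs come from the arrays in the trace but discovering them via lspci is an assumption (the real script reads a prebuilt pci_bus_cache):

intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()

while read -r addr vendor device; do
    case "$vendor:$device" in
        $intel:0x1592 | $intel:0x159b) e810+=("$addr") ;;  # Intel E810
        $intel:0x37d2) x722+=("$addr") ;;                  # Intel X722 iWARP
        $mellanox:0x1013 | $mellanox:0x1015 | $mellanox:0x1017 | \
        $mellanox:0x1019 | $mellanox:0x101d | $mellanox:0x1021 | \
        $mellanox:0xa2d6 | $mellanox:0xa2dc) mlx+=("$addr") ;;  # Mellanox ConnectX/BlueField
    esac
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')

Both hits in this job are 0x15b3:0x1017 (at 0000:09:00.0 and 0000:09:00.1), so pci_devs collapses to the mlx list and the mlx5-specific branch is taken.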
00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:07:19.887 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:07:19.887 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.887 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:07:19.888 Found net devices under 0000:09:00.0: mlx_0_0 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:07:19.888 Found net devices under 0000:09:00.1: mlx_0_1 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@58 -- # uname 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:19.888 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:19.888 link/ether b8:59:9f:af:fe:10 brd 
ff:ff:ff:ff:ff:ff 00:07:19.888 altname enp9s0f0np0 00:07:19.888 inet 192.168.100.8/24 scope global mlx_0_0 00:07:19.888 valid_lft forever preferred_lft forever 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:19.888 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:19.888 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:07:19.888 altname enp9s0f1np1 00:07:19.888 inet 192.168.100.9/24 scope global mlx_0_1 00:07:19.888 valid_lft forever preferred_lft forever 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:19.888 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:20.147 23:53:49 
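The two interface scans above are nvmf/common.sh's get_rdma_if_list matching the host's NICs against the RDMA-capable netdevs reported by rxe_cfg. A sketch of that helper as reconstructed from its xtrace (common.sh@92-@105); only the traced commands are visible here, so details such as how the net_devs array was populated are assumptions:

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        # scripts/rxe_cfg_small.sh rxe-net prints the RDMA-capable netdevs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
        # net_devs[] is assumed to hold this host's candidate NICs (set up earlier)
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"    # emits mlx_0_0, then mlx_0_1 in this run
                    continue 2
                fi
            done
        done
    }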
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:20.147 192.168.100.9' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:20.147 192.168.100.9' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:20.147 192.168.100.9' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=444128 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 444128 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 444128 ']' 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
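The per-interface address lookups and the head/tail selection traced above reduce to one small helper plus two list picks. A minimal sketch reusing the names from the trace (get_ip_address, RDMA_IP_LIST, NVMF_FIRST/SECOND_TARGET_IP); the loop wrapper is an assumption, the pipelines are exactly as traced:

    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per address; field 4 is e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for nic in $(get_rdma_if_list); do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9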
00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:20.147 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.148 [2024-05-14 23:53:49.334418] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:07:20.148 [2024-05-14 23:53:49.334492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.148 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.148 [2024-05-14 23:53:49.404428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.406 [2024-05-14 23:53:49.517390] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.406 [2024-05-14 23:53:49.517442] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.406 [2024-05-14 23:53:49.517470] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.406 [2024-05-14 23:53:49.517481] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.406 [2024-05-14 23:53:49.517490] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.406 [2024-05-14 23:53:49.517540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.406 [2024-05-14 23:53:49.517600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.406 [2024-05-14 23:53:49.517629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.406 [2024-05-14 23:53:49.517630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:20.406 "tick_rate": 2700000000, 00:07:20.406 "poll_groups": [ 00:07:20.406 { 00:07:20.406 "name": "nvmf_tgt_poll_group_000", 00:07:20.406 "admin_qpairs": 0, 00:07:20.406 "io_qpairs": 0, 00:07:20.406 "current_admin_qpairs": 0, 00:07:20.406 "current_io_qpairs": 0, 00:07:20.406 "pending_bdev_io": 0, 00:07:20.406 "completed_nvme_io": 0, 00:07:20.406 "transports": [] 00:07:20.406 }, 00:07:20.406 { 00:07:20.406 "name": "nvmf_tgt_poll_group_001", 00:07:20.406 "admin_qpairs": 0, 00:07:20.406 "io_qpairs": 0, 00:07:20.406 "current_admin_qpairs": 0, 00:07:20.406 "current_io_qpairs": 0, 00:07:20.406 "pending_bdev_io": 0, 00:07:20.406 "completed_nvme_io": 0, 00:07:20.406 "transports": [] 
00:07:20.406 }, 00:07:20.406 { 00:07:20.406 "name": "nvmf_tgt_poll_group_002", 00:07:20.406 "admin_qpairs": 0, 00:07:20.406 "io_qpairs": 0, 00:07:20.406 "current_admin_qpairs": 0, 00:07:20.406 "current_io_qpairs": 0, 00:07:20.406 "pending_bdev_io": 0, 00:07:20.406 "completed_nvme_io": 0, 00:07:20.406 "transports": [] 00:07:20.406 }, 00:07:20.406 { 00:07:20.406 "name": "nvmf_tgt_poll_group_003", 00:07:20.406 "admin_qpairs": 0, 00:07:20.406 "io_qpairs": 0, 00:07:20.406 "current_admin_qpairs": 0, 00:07:20.406 "current_io_qpairs": 0, 00:07:20.406 "pending_bdev_io": 0, 00:07:20.406 "completed_nvme_io": 0, 00:07:20.406 "transports": [] 00:07:20.406 } 00:07:20.406 ] 00:07:20.406 }' 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:20.406 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.665 [2024-05-14 23:53:49.802867] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aeda20/0x1af1f10) succeed. 00:07:20.665 [2024-05-14 23:53:49.813582] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aef060/0x1b335a0) succeed. 
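With the target up, the transport creation and the stats probes above can be replayed by hand against the RPC socket; a sketch using scripts/rpc.py in place of the harness's rpc_cmd wrapper, with the exact options from this run:

    # Create the RDMA transport (same options as target/rpc.sh@31 above)
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    # Before the transport exists, .poll_groups[0].transports[0] is null; afterwards
    # each of the four poll groups reports an RDMA transport with one entry per
    # IB device (mlx5_0 and mlx5_1 here).
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0].devices[].name'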
00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:20.665 "tick_rate": 2700000000, 00:07:20.665 "poll_groups": [ 00:07:20.665 { 00:07:20.665 "name": "nvmf_tgt_poll_group_000", 00:07:20.665 "admin_qpairs": 0, 00:07:20.665 "io_qpairs": 0, 00:07:20.665 "current_admin_qpairs": 0, 00:07:20.665 "current_io_qpairs": 0, 00:07:20.665 "pending_bdev_io": 0, 00:07:20.665 "completed_nvme_io": 0, 00:07:20.665 "transports": [ 00:07:20.665 { 00:07:20.665 "trtype": "RDMA", 00:07:20.665 "pending_data_buffer": 0, 00:07:20.665 "devices": [ 00:07:20.665 { 00:07:20.665 "name": "mlx5_0", 00:07:20.665 "polls": 20869, 00:07:20.665 "idle_polls": 20869, 00:07:20.665 "completions": 0, 00:07:20.665 "requests": 0, 00:07:20.665 "request_latency": 0, 00:07:20.665 "pending_free_request": 0, 00:07:20.665 "pending_rdma_read": 0, 00:07:20.665 "pending_rdma_write": 0, 00:07:20.665 "pending_rdma_send": 0, 00:07:20.665 "total_send_wrs": 0, 00:07:20.665 "send_doorbell_updates": 0, 00:07:20.665 "total_recv_wrs": 4096, 00:07:20.665 "recv_doorbell_updates": 1 00:07:20.665 }, 00:07:20.665 { 00:07:20.665 "name": "mlx5_1", 00:07:20.665 "polls": 20869, 00:07:20.665 "idle_polls": 20869, 00:07:20.665 "completions": 0, 00:07:20.665 "requests": 0, 00:07:20.665 "request_latency": 0, 00:07:20.665 "pending_free_request": 0, 00:07:20.665 "pending_rdma_read": 0, 00:07:20.665 "pending_rdma_write": 0, 00:07:20.665 "pending_rdma_send": 0, 00:07:20.665 "total_send_wrs": 0, 00:07:20.665 "send_doorbell_updates": 0, 00:07:20.665 "total_recv_wrs": 4096, 00:07:20.665 "recv_doorbell_updates": 1 00:07:20.665 } 00:07:20.665 ] 00:07:20.665 } 00:07:20.665 ] 00:07:20.665 }, 00:07:20.665 { 00:07:20.665 "name": "nvmf_tgt_poll_group_001", 00:07:20.665 "admin_qpairs": 0, 00:07:20.665 "io_qpairs": 0, 00:07:20.665 "current_admin_qpairs": 0, 00:07:20.665 "current_io_qpairs": 0, 00:07:20.665 "pending_bdev_io": 0, 00:07:20.665 "completed_nvme_io": 0, 00:07:20.665 "transports": [ 00:07:20.665 { 00:07:20.665 "trtype": "RDMA", 00:07:20.665 "pending_data_buffer": 0, 00:07:20.665 "devices": [ 00:07:20.665 { 00:07:20.665 "name": "mlx5_0", 00:07:20.665 "polls": 13426, 00:07:20.665 "idle_polls": 13426, 00:07:20.665 "completions": 0, 00:07:20.665 "requests": 0, 00:07:20.665 "request_latency": 0, 00:07:20.665 "pending_free_request": 0, 00:07:20.665 "pending_rdma_read": 0, 00:07:20.665 "pending_rdma_write": 0, 00:07:20.665 "pending_rdma_send": 0, 00:07:20.665 "total_send_wrs": 0, 00:07:20.665 "send_doorbell_updates": 0, 00:07:20.665 "total_recv_wrs": 4096, 00:07:20.665 "recv_doorbell_updates": 1 00:07:20.665 }, 00:07:20.665 { 00:07:20.665 "name": "mlx5_1", 00:07:20.665 "polls": 13426, 00:07:20.665 "idle_polls": 13426, 00:07:20.665 "completions": 0, 00:07:20.665 "requests": 0, 00:07:20.665 "request_latency": 0, 00:07:20.665 "pending_free_request": 0, 00:07:20.665 "pending_rdma_read": 0, 00:07:20.665 "pending_rdma_write": 0, 00:07:20.665 "pending_rdma_send": 0, 00:07:20.665 "total_send_wrs": 0, 00:07:20.665 "send_doorbell_updates": 0, 00:07:20.665 "total_recv_wrs": 4096, 00:07:20.665 "recv_doorbell_updates": 
1 00:07:20.665 } 00:07:20.665 ] 00:07:20.665 } 00:07:20.665 ] 00:07:20.665 }, 00:07:20.665 { 00:07:20.665 "name": "nvmf_tgt_poll_group_002", 00:07:20.665 "admin_qpairs": 0, 00:07:20.665 "io_qpairs": 0, 00:07:20.665 "current_admin_qpairs": 0, 00:07:20.665 "current_io_qpairs": 0, 00:07:20.665 "pending_bdev_io": 0, 00:07:20.665 "completed_nvme_io": 0, 00:07:20.665 "transports": [ 00:07:20.665 { 00:07:20.665 "trtype": "RDMA", 00:07:20.665 "pending_data_buffer": 0, 00:07:20.665 "devices": [ 00:07:20.665 { 00:07:20.665 "name": "mlx5_0", 00:07:20.665 "polls": 6756, 00:07:20.665 "idle_polls": 6756, 00:07:20.665 "completions": 0, 00:07:20.665 "requests": 0, 00:07:20.665 "request_latency": 0, 00:07:20.665 "pending_free_request": 0, 00:07:20.665 "pending_rdma_read": 0, 00:07:20.665 "pending_rdma_write": 0, 00:07:20.665 "pending_rdma_send": 0, 00:07:20.665 "total_send_wrs": 0, 00:07:20.665 "send_doorbell_updates": 0, 00:07:20.665 "total_recv_wrs": 4096, 00:07:20.665 "recv_doorbell_updates": 1 00:07:20.665 }, 00:07:20.665 { 00:07:20.665 "name": "mlx5_1", 00:07:20.665 "polls": 6756, 00:07:20.665 "idle_polls": 6756, 00:07:20.665 "completions": 0, 00:07:20.665 "requests": 0, 00:07:20.665 "request_latency": 0, 00:07:20.665 "pending_free_request": 0, 00:07:20.665 "pending_rdma_read": 0, 00:07:20.665 "pending_rdma_write": 0, 00:07:20.665 "pending_rdma_send": 0, 00:07:20.665 "total_send_wrs": 0, 00:07:20.665 "send_doorbell_updates": 0, 00:07:20.665 "total_recv_wrs": 4096, 00:07:20.665 "recv_doorbell_updates": 1 00:07:20.665 } 00:07:20.665 ] 00:07:20.665 } 00:07:20.665 ] 00:07:20.665 }, 00:07:20.665 { 00:07:20.665 "name": "nvmf_tgt_poll_group_003", 00:07:20.665 "admin_qpairs": 0, 00:07:20.665 "io_qpairs": 0, 00:07:20.665 "current_admin_qpairs": 0, 00:07:20.665 "current_io_qpairs": 0, 00:07:20.665 "pending_bdev_io": 0, 00:07:20.665 "completed_nvme_io": 0, 00:07:20.665 "transports": [ 00:07:20.665 { 00:07:20.665 "trtype": "RDMA", 00:07:20.665 "pending_data_buffer": 0, 00:07:20.665 "devices": [ 00:07:20.665 { 00:07:20.665 "name": "mlx5_0", 00:07:20.665 "polls": 535, 00:07:20.665 "idle_polls": 535, 00:07:20.665 "completions": 0, 00:07:20.665 "requests": 0, 00:07:20.665 "request_latency": 0, 00:07:20.665 "pending_free_request": 0, 00:07:20.665 "pending_rdma_read": 0, 00:07:20.665 "pending_rdma_write": 0, 00:07:20.665 "pending_rdma_send": 0, 00:07:20.665 "total_send_wrs": 0, 00:07:20.665 "send_doorbell_updates": 0, 00:07:20.665 "total_recv_wrs": 4096, 00:07:20.665 "recv_doorbell_updates": 1 00:07:20.665 }, 00:07:20.665 { 00:07:20.665 "name": "mlx5_1", 00:07:20.665 "polls": 535, 00:07:20.665 "idle_polls": 535, 00:07:20.665 "completions": 0, 00:07:20.665 "requests": 0, 00:07:20.665 "request_latency": 0, 00:07:20.665 "pending_free_request": 0, 00:07:20.665 "pending_rdma_read": 0, 00:07:20.665 "pending_rdma_write": 0, 00:07:20.665 "pending_rdma_send": 0, 00:07:20.665 "total_send_wrs": 0, 00:07:20.665 "send_doorbell_updates": 0, 00:07:20.665 "total_recv_wrs": 4096, 00:07:20.665 "recv_doorbell_updates": 1 00:07:20.665 } 00:07:20.665 ] 00:07:20.665 } 00:07:20.665 ] 00:07:20.665 } 00:07:20.665 ] 00:07:20.665 }' 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:20.665 23:53:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:20.925 
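The jq | wc -l and jq | awk pipelines above belong to target/rpc.sh's jcount and jsum helpers, run against the $stats JSON captured from nvmf_get_stats. A reconstruction from their xtrace (rpc.sh@14-@20); feeding the filter from "$stats" via a here-string is an assumption, since only the pipeline stages appear in the trace:

    jcount() {
        local filter=$1
        jq "$filter" <<< "$stats" | wc -l                        # count matching nodes
    }

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'  # sum their values
    }

    # In this run: jcount '.poll_groups[].name' gives 4 poll groups, and
    # jsum '.poll_groups[].admin_qpairs' gives 0 while no host is connected.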
23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 Malloc1 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.925 
23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 [2024-05-14 23:53:50.241074] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:20.925 [2024-05-14 23:53:50.241416] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -s 4420 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -s 4420 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:20.925 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -s 4420 00:07:20.925 [2024-05-14 23:53:50.271306] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:21.184 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:21.184 could not add new controller: failed to write to nvme-fabrics device 00:07:21.184 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:21.184 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.184 23:53:50 nvmf_rdma.nvmf_rpc -- 
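That failure is the intended result: allow_any_host was disabled when the subsystem was created, so a host NQN that is not on the subsystem's list is rejected before any controller is created. The allow/deny sequence this step exercises, restated with the run's own identifiers (scripts/rpc.py standing in for rpc_cmd):

    # allow_any_host disabled and host not whitelisted -> connect must fail
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # -> "Subsystem ... does not allow host ..." and the fabrics write fails

    # Whitelist the host; the identical connect then succeeds
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55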
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.184 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.184 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.184 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.184 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.184 23:53:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.184 23:53:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:24.477 23:53:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.477 23:53:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:24.477 23:53:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.477 23:53:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:24.477 23:53:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:26.375 23:53:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:26.375 23:53:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:26.375 23:53:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.375 23:53:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:26.375 23:53:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.375 23:53:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:26.375 23:53:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:28.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.901 23:53:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:28.901 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:28.901 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:28.901 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.901 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:28.901 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:28.902 23:53:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:28.902 [2024-05-14 23:53:58.034401] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:28.902 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:28.902 could not add new controller: failed to write to nvme-fabrics device 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.902 23:53:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:32.181 23:54:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.181 23:54:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:32.181 23:54:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.181 23:54:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:32.181 23:54:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:34.079 23:54:03 nvmf_rdma.nvmf_rpc -- 
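Both rejected connects run through the harness's NOT wrapper, which inverts the exit status so that an expected failure counts as a pass; the es bookkeeping and valid_exec_arg probing traced above belong to it. A rough reconstruction only; the real helper in common/autotest_common.sh also special-cases exit codes above 128 (death by signal), which is elided here:

    NOT() {
        local es=0
        valid_exec_arg "$@" || return 1   # harness check: refuse non-executable args
        "$@" || es=$?
        (( !es == 0 ))                    # succeed only if the wrapped command failed
    }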
common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:34.079 23:54:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:34.079 23:54:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.079 23:54:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:34.079 23:54:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.079 23:54:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:34.079 23:54:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:36.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.604 [2024-05-14 23:54:05.724995] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode1 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.604 23:54:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.605 23:54:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:39.882 23:54:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:39.882 23:54:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:39.882 23:54:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:39.882 23:54:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:39.882 23:54:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:41.819 23:54:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:41.819 23:54:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:41.819 23:54:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.076 23:54:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:42.076 23:54:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.076 23:54:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:42.076 23:54:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:44.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.602 [2024-05-14 23:54:13.486831] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.602 23:54:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:47.880 23:54:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:47.880 23:54:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:47.880 23:54:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:47.880 23:54:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:47.880 23:54:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:49.792 23:54:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:49.792 23:54:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:49.792 23:54:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:49.792 23:54:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:49.792 23:54:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:49.792 23:54:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:49.792 23:54:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.318 23:54:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.318 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:52.318 23:54:21 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:52.318 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.318 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:52.318 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.318 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:52.318 23:54:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.319 [2024-05-14 23:54:21.393948] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.319 23:54:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:56.505 23:54:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.505 
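waitforserial then polls until the newly attached namespace surfaces as a block device whose SERIAL column matches the subsystem's serial number. As reconstructed from the loop traced above (autotest_common.sh@1194-@1204); the exact placement of the sleep within the loop is approximate:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while ((i++ <= 15)); do
            sleep 2
            # e.g. grep -c SPDKISFASTANDAWESOME over the lsblk serial listing
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }

waitforserial_disconnect inverts the test with grep -q -w, returning once the serial has vanished from both lsblk listings.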
23:54:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:56.505 23:54:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.505 23:54:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:56.505 23:54:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:57.878 23:54:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:57.878 23:54:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:57.878 23:54:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.878 23:54:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:57.878 23:54:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.878 23:54:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:57.878 23:54:27 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:00.404 [2024-05-14 23:54:29.487770] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.404 23:54:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:04.641 23:54:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:04.641 23:54:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:04.641 23:54:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.641 23:54:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:04.641 23:54:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:06.014 23:54:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:06.014 23:54:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:06.014 23:54:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.014 23:54:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:06.014 23:54:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.014 23:54:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:06.014 23:54:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.541 [2024-05-14 23:54:37.594157] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.541 23:54:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:12.717 23:54:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:12.717 23:54:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:12.717 23:54:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:12.717 23:54:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:12.717 23:54:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:14.085 23:54:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:14.085 23:54:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:14.085 23:54:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.085 
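Stripped of the xtrace noise, every iteration of this for i in $(seq 1 5) pass repeats the same create/connect/teardown cycle (target/rpc.sh@81-@94). The sequence below restates it with scripts/rpc.py standing in for rpc_cmd; the commands and arguments are exactly as traced:

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        # waitforserial SPDKISFASTANDAWESOME ... then:
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1

        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done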
23:54:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:14.085 23:54:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.085 23:54:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:14.085 23:54:43 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:16.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 [2024-05-14 23:54:45.515259] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 [2024-05-14 23:54:45.567358] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 [2024-05-14 23:54:45.615840] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.608 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
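The loop traced here (target/rpc.sh@99 through @107) cycles one subsystem through its full lifecycle over SPDK's JSON-RPC interface; the recurring "[[ 0 == 0 ]]" lines are the rpc_cmd helper asserting a zero exit status after each call. A condensed sketch of a single iteration follows — an illustration only, not part of the captured log, assuming a running nvmf_tgt with the RDMA transport configured, scripts/rpc.py reachable on its default socket, and a Malloc1 bdev already registered, as in this run. Each RPC name and argument set mirrors the trace.

# Sketch of the lifecycle one rpc.sh loop iteration exercises
# (rpc.py here stands for the workspace's scripts/rpc.py):
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1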
00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 [2024-05-14 23:54:45.664314] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 [2024-05-14 23:54:45.712799] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.609 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:16.609 "tick_rate": 2700000000, 00:08:16.609 "poll_groups": [ 00:08:16.609 { 00:08:16.609 "name": "nvmf_tgt_poll_group_000", 00:08:16.609 "admin_qpairs": 2, 00:08:16.609 "io_qpairs": 27, 00:08:16.609 "current_admin_qpairs": 0, 00:08:16.609 "current_io_qpairs": 0, 00:08:16.609 "pending_bdev_io": 0, 00:08:16.609 "completed_nvme_io": 175, 00:08:16.609 "transports": [ 00:08:16.609 { 00:08:16.609 "trtype": "RDMA", 00:08:16.609 "pending_data_buffer": 0, 00:08:16.609 "devices": [ 00:08:16.609 { 00:08:16.609 "name": "mlx5_0", 00:08:16.609 "polls": 7226400, 00:08:16.609 "idle_polls": 7225977, 00:08:16.609 "completions": 485, 00:08:16.609 "requests": 242, 00:08:16.609 "request_latency": 76510725, 00:08:16.609 "pending_free_request": 0, 00:08:16.609 "pending_rdma_read": 0, 00:08:16.609 "pending_rdma_write": 0, 00:08:16.609 "pending_rdma_send": 0, 00:08:16.609 "total_send_wrs": 425, 00:08:16.609 "send_doorbell_updates": 208, 00:08:16.609 "total_recv_wrs": 4338, 00:08:16.609 "recv_doorbell_updates": 208 00:08:16.609 }, 00:08:16.609 { 00:08:16.609 "name": "mlx5_1", 00:08:16.609 "polls": 7226400, 00:08:16.609 "idle_polls": 7226400, 00:08:16.609 "completions": 0, 00:08:16.609 "requests": 0, 00:08:16.609 "request_latency": 0, 00:08:16.609 "pending_free_request": 0, 00:08:16.609 "pending_rdma_read": 0, 00:08:16.609 "pending_rdma_write": 0, 00:08:16.609 "pending_rdma_send": 0, 00:08:16.609 "total_send_wrs": 0, 00:08:16.609 "send_doorbell_updates": 0, 00:08:16.609 "total_recv_wrs": 4096, 00:08:16.609 "recv_doorbell_updates": 1 00:08:16.609 } 00:08:16.609 ] 00:08:16.609 } 00:08:16.609 ] 00:08:16.609 }, 00:08:16.609 { 00:08:16.609 "name": "nvmf_tgt_poll_group_001", 00:08:16.609 "admin_qpairs": 2, 00:08:16.609 "io_qpairs": 26, 00:08:16.609 "current_admin_qpairs": 0, 
00:08:16.609 "current_io_qpairs": 0, 00:08:16.609 "pending_bdev_io": 0, 00:08:16.609 "completed_nvme_io": 27, 00:08:16.609 "transports": [ 00:08:16.609 { 00:08:16.609 "trtype": "RDMA", 00:08:16.609 "pending_data_buffer": 0, 00:08:16.609 "devices": [ 00:08:16.609 { 00:08:16.609 "name": "mlx5_0", 00:08:16.609 "polls": 7474499, 00:08:16.609 "idle_polls": 7474315, 00:08:16.609 "completions": 184, 00:08:16.609 "requests": 92, 00:08:16.609 "request_latency": 12992910, 00:08:16.609 "pending_free_request": 0, 00:08:16.609 "pending_rdma_read": 0, 00:08:16.609 "pending_rdma_write": 0, 00:08:16.609 "pending_rdma_send": 0, 00:08:16.609 "total_send_wrs": 126, 00:08:16.609 "send_doorbell_updates": 92, 00:08:16.609 "total_recv_wrs": 4188, 00:08:16.609 "recv_doorbell_updates": 93 00:08:16.609 }, 00:08:16.609 { 00:08:16.609 "name": "mlx5_1", 00:08:16.609 "polls": 7474499, 00:08:16.609 "idle_polls": 7474499, 00:08:16.609 "completions": 0, 00:08:16.609 "requests": 0, 00:08:16.609 "request_latency": 0, 00:08:16.609 "pending_free_request": 0, 00:08:16.609 "pending_rdma_read": 0, 00:08:16.609 "pending_rdma_write": 0, 00:08:16.609 "pending_rdma_send": 0, 00:08:16.609 "total_send_wrs": 0, 00:08:16.609 "send_doorbell_updates": 0, 00:08:16.609 "total_recv_wrs": 4096, 00:08:16.609 "recv_doorbell_updates": 1 00:08:16.609 } 00:08:16.609 ] 00:08:16.609 } 00:08:16.609 ] 00:08:16.609 }, 00:08:16.609 { 00:08:16.609 "name": "nvmf_tgt_poll_group_002", 00:08:16.609 "admin_qpairs": 1, 00:08:16.609 "io_qpairs": 26, 00:08:16.609 "current_admin_qpairs": 0, 00:08:16.609 "current_io_qpairs": 0, 00:08:16.609 "pending_bdev_io": 0, 00:08:16.609 "completed_nvme_io": 99, 00:08:16.609 "transports": [ 00:08:16.609 { 00:08:16.609 "trtype": "RDMA", 00:08:16.609 "pending_data_buffer": 0, 00:08:16.609 "devices": [ 00:08:16.609 { 00:08:16.609 "name": "mlx5_0", 00:08:16.609 "polls": 7391783, 00:08:16.609 "idle_polls": 7391539, 00:08:16.609 "completions": 265, 00:08:16.609 "requests": 132, 00:08:16.609 "request_latency": 34976844, 00:08:16.609 "pending_free_request": 0, 00:08:16.609 "pending_rdma_read": 0, 00:08:16.609 "pending_rdma_write": 0, 00:08:16.609 "pending_rdma_send": 0, 00:08:16.609 "total_send_wrs": 223, 00:08:16.609 "send_doorbell_updates": 122, 00:08:16.609 "total_recv_wrs": 4228, 00:08:16.609 "recv_doorbell_updates": 122 00:08:16.609 }, 00:08:16.609 { 00:08:16.609 "name": "mlx5_1", 00:08:16.609 "polls": 7391783, 00:08:16.609 "idle_polls": 7391783, 00:08:16.609 "completions": 0, 00:08:16.609 "requests": 0, 00:08:16.609 "request_latency": 0, 00:08:16.609 "pending_free_request": 0, 00:08:16.609 "pending_rdma_read": 0, 00:08:16.609 "pending_rdma_write": 0, 00:08:16.609 "pending_rdma_send": 0, 00:08:16.610 "total_send_wrs": 0, 00:08:16.610 "send_doorbell_updates": 0, 00:08:16.610 "total_recv_wrs": 4096, 00:08:16.610 "recv_doorbell_updates": 1 00:08:16.610 } 00:08:16.610 ] 00:08:16.610 } 00:08:16.610 ] 00:08:16.610 }, 00:08:16.610 { 00:08:16.610 "name": "nvmf_tgt_poll_group_003", 00:08:16.610 "admin_qpairs": 2, 00:08:16.610 "io_qpairs": 26, 00:08:16.610 "current_admin_qpairs": 0, 00:08:16.610 "current_io_qpairs": 0, 00:08:16.610 "pending_bdev_io": 0, 00:08:16.610 "completed_nvme_io": 154, 00:08:16.610 "transports": [ 00:08:16.610 { 00:08:16.610 "trtype": "RDMA", 00:08:16.610 "pending_data_buffer": 0, 00:08:16.610 "devices": [ 00:08:16.610 { 00:08:16.610 "name": "mlx5_0", 00:08:16.610 "polls": 5646408, 00:08:16.610 "idle_polls": 5646043, 00:08:16.610 "completions": 438, 00:08:16.610 "requests": 219, 00:08:16.610 "request_latency": 
76655736, 00:08:16.610 "pending_free_request": 0, 00:08:16.610 "pending_rdma_read": 0, 00:08:16.610 "pending_rdma_write": 0, 00:08:16.610 "pending_rdma_send": 0, 00:08:16.610 "total_send_wrs": 380, 00:08:16.610 "send_doorbell_updates": 186, 00:08:16.610 "total_recv_wrs": 4315, 00:08:16.610 "recv_doorbell_updates": 187 00:08:16.610 }, 00:08:16.610 { 00:08:16.610 "name": "mlx5_1", 00:08:16.610 "polls": 5646408, 00:08:16.610 "idle_polls": 5646408, 00:08:16.610 "completions": 0, 00:08:16.610 "requests": 0, 00:08:16.610 "request_latency": 0, 00:08:16.610 "pending_free_request": 0, 00:08:16.610 "pending_rdma_read": 0, 00:08:16.610 "pending_rdma_write": 0, 00:08:16.610 "pending_rdma_send": 0, 00:08:16.610 "total_send_wrs": 0, 00:08:16.610 "send_doorbell_updates": 0, 00:08:16.610 "total_recv_wrs": 4096, 00:08:16.610 "recv_doorbell_updates": 1 00:08:16.610 } 00:08:16.610 ] 00:08:16.610 } 00:08:16.610 ] 00:08:16.610 } 00:08:16.610 ] 00:08:16.610 }' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1372 > 0 )) 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 201136215 > 0 )) 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:16.610 23:54:45 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.610 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:16.610 rmmod nvme_rdma 00:08:16.610 rmmod nvme_fabrics 00:08:16.867 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.867 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:16.867 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:16.867 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 444128 ']' 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 444128 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 444128 ']' 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 444128 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 444128 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 444128' 00:08:16.868 killing process with pid 444128 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 444128 00:08:16.868 [2024-05-14 23:54:45.986127] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:16.868 23:54:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 444128 00:08:16.868 [2024-05-14 23:54:46.068745] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:17.126 23:54:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.126 23:54:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:17.126 00:08:17.126 real 0m59.833s 00:08:17.126 user 3m47.434s 00:08:17.126 sys 0m3.495s 00:08:17.126 23:54:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:17.126 23:54:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.126 ************************************ 00:08:17.126 END TEST nvmf_rpc 00:08:17.126 ************************************ 00:08:17.126 23:54:46 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:17.126 23:54:46 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:17.126 23:54:46 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:17.126 23:54:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:17.126 ************************************ 00:08:17.126 START TEST nvmf_invalid 00:08:17.126 ************************************ 00:08:17.126 23:54:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:17.384 * Looking for test 
storage... 00:08:17.384 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:17.384 23:54:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.384 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:17.384 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.384 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.384 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.385 23:54:46 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.385 23:54:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.918 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.919 
23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:08:19.919 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:08:19.919 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:08:19.919 Found net devices under 0000:09:00.0: mlx_0_0 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:09:00.1: mlx_0_1' 00:08:19.919 Found net devices under 0000:09:00.1: mlx_0_1 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:19.919 23:54:48 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:19.919 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:19.919 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:08:19.919 altname enp9s0f0np0 00:08:19.919 inet 192.168.100.8/24 scope global mlx_0_0 00:08:19.919 valid_lft forever preferred_lft forever 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:19.919 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:19.919 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:08:19.919 altname enp9s0f1np1 00:08:19.919 inet 192.168.100.9/24 scope global mlx_0_1 00:08:19.919 valid_lft forever preferred_lft forever 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:19.919 23:54:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:19.919 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:19.920 192.168.100.9' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:19.920 192.168.100.9' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:19.920 192.168.100.9' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # 
nvmfappstart -m 0xF 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=453629 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 453629 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 453629 ']' 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:19.920 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:19.920 [2024-05-14 23:54:49.102495] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:08:19.920 [2024-05-14 23:54:49.102583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.920 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.920 [2024-05-14 23:54:49.180267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.178 [2024-05-14 23:54:49.303397] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.178 [2024-05-14 23:54:49.303463] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.178 [2024-05-14 23:54:49.303480] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.178 [2024-05-14 23:54:49.303493] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.178 [2024-05-14 23:54:49.303505] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
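With the target now up (nvmfpid=453629 above), the target/invalid.sh checks that follow feed deliberately malformed arguments to nvmf_create_subsystem and match the JSON-RPC error text that comes back: first an unknown target name, then a serial number and a model number carrying a non-printable 0x1f byte. A minimal sketch of that negative-test pattern — illustrative only, assuming scripts/rpc.py and the running target from this log:

# Sketch of the negative check performed below (target/invalid.sh@40-41):
# a bogus target name must yield an "Unable to find target" JSON-RPC error.
out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2705 2>&1) || true
[[ $out == *"Unable to find target"* ]] && echo "got the expected error"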
00:08:20.178 [2024-05-14 23:54:49.303594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.178 [2024-05-14 23:54:49.303649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.178 [2024-05-14 23:54:49.303678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.178 [2024-05-14 23:54:49.303681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.178 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:20.178 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:08:20.178 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.178 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.178 23:54:49 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:20.178 23:54:49 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.178 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:20.178 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2705 00:08:20.436 [2024-05-14 23:54:49.713654] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:20.436 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:20.436 { 00:08:20.436 "nqn": "nqn.2016-06.io.spdk:cnode2705", 00:08:20.436 "tgt_name": "foobar", 00:08:20.436 "method": "nvmf_create_subsystem", 00:08:20.436 "req_id": 1 00:08:20.436 } 00:08:20.436 Got JSON-RPC error response 00:08:20.436 response: 00:08:20.436 { 00:08:20.436 "code": -32603, 00:08:20.436 "message": "Unable to find target foobar" 00:08:20.436 }' 00:08:20.436 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:20.436 { 00:08:20.436 "nqn": "nqn.2016-06.io.spdk:cnode2705", 00:08:20.436 "tgt_name": "foobar", 00:08:20.436 "method": "nvmf_create_subsystem", 00:08:20.436 "req_id": 1 00:08:20.436 } 00:08:20.436 Got JSON-RPC error response 00:08:20.436 response: 00:08:20.436 { 00:08:20.436 "code": -32603, 00:08:20.436 "message": "Unable to find target foobar" 00:08:20.436 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:20.436 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:20.436 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21523 00:08:20.695 [2024-05-14 23:54:49.958518] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21523: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:20.695 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:20.695 { 00:08:20.695 "nqn": "nqn.2016-06.io.spdk:cnode21523", 00:08:20.695 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:20.695 "method": "nvmf_create_subsystem", 00:08:20.695 "req_id": 1 00:08:20.695 } 00:08:20.695 Got JSON-RPC error response 00:08:20.695 response: 00:08:20.695 { 00:08:20.695 "code": -32602, 00:08:20.695 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:20.695 }' 00:08:20.695 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ 
request: 00:08:20.695 { 00:08:20.695 "nqn": "nqn.2016-06.io.spdk:cnode21523", 00:08:20.695 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:20.695 "method": "nvmf_create_subsystem", 00:08:20.695 "req_id": 1 00:08:20.695 } 00:08:20.695 Got JSON-RPC error response 00:08:20.695 response: 00:08:20.695 { 00:08:20.695 "code": -32602, 00:08:20.695 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:20.695 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:20.695 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:20.695 23:54:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12283 00:08:20.954 [2024-05-14 23:54:50.199378] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12283: invalid model number 'SPDK_Controller' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:20.954 { 00:08:20.954 "nqn": "nqn.2016-06.io.spdk:cnode12283", 00:08:20.954 "model_number": "SPDK_Controller\u001f", 00:08:20.954 "method": "nvmf_create_subsystem", 00:08:20.954 "req_id": 1 00:08:20.954 } 00:08:20.954 Got JSON-RPC error response 00:08:20.954 response: 00:08:20.954 { 00:08:20.954 "code": -32602, 00:08:20.954 "message": "Invalid MN SPDK_Controller\u001f" 00:08:20.954 }' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:20.954 { 00:08:20.954 "nqn": "nqn.2016-06.io.spdk:cnode12283", 00:08:20.954 "model_number": "SPDK_Controller\u001f", 00:08:20.954 "method": "nvmf_create_subsystem", 00:08:20.954 "req_id": 1 00:08:20.954 } 00:08:20.954 Got JSON-RPC error response 00:08:20.954 response: 00:08:20.954 { 00:08:20.954 "code": -32602, 00:08:20.954 "message": "Invalid MN SPDK_Controller\u001f" 00:08:20.954 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 91 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:20.954 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:08:20.955 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'd[6Jz?te&ucMzLa^HK?A1' 00:08:21.213 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'd[6Jz?te&ucMzLa^HK?A1' nqn.2016-06.io.spdk:cnode18412 00:08:21.478 [2024-05-14 23:54:50.576576] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18412: invalid serial number 'd[6Jz?te&ucMzLa^HK?A1' 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:21.478 { 00:08:21.478 "nqn": "nqn.2016-06.io.spdk:cnode18412", 00:08:21.478 "serial_number": "d[6Jz?te&ucMzLa^HK?A1", 00:08:21.478 "method": "nvmf_create_subsystem", 00:08:21.478 "req_id": 1 00:08:21.478 } 00:08:21.478 Got JSON-RPC error response 00:08:21.478 response: 00:08:21.478 { 00:08:21.478 "code": -32602, 00:08:21.478 "message": "Invalid SN d[6Jz?te&ucMzLa^HK?A1" 00:08:21.478 }' 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:21.478 { 00:08:21.478 "nqn": "nqn.2016-06.io.spdk:cnode18412", 00:08:21.478 "serial_number": "d[6Jz?te&ucMzLa^HK?A1", 00:08:21.478 "method": "nvmf_create_subsystem", 00:08:21.478 "req_id": 1 00:08:21.478 } 00:08:21.478 Got JSON-RPC error response 00:08:21.478 response: 00:08:21.478 { 00:08:21.478 "code": -32602, 00:08:21.478 "message": "Invalid SN d[6Jz?te&ucMzLa^HK?A1" 00:08:21.478 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@22 -- # local string 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.478 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:21.479 
23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 
23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:21.479 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo '1Vk@-:/du<$%yru;@*% +gG"$9UGh5{unx!u$/>Kw' 00:08:21.480 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '1Vk@-:/du<$%yru;@*% +gG"$9UGh5{unx!u$/>Kw' nqn.2016-06.io.spdk:cnode4731 00:08:21.764 [2024-05-14 23:54:50.969856] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4731: invalid model number '1Vk@-:/du<$%yru;@*% +gG"$9UGh5{unx!u$/>Kw' 00:08:21.764 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:21.764 { 00:08:21.764 "nqn": "nqn.2016-06.io.spdk:cnode4731", 00:08:21.764 "model_number": "1Vk@-:/du<$%yru;@*% +gG\"$9UGh5{unx!u$/>Kw", 00:08:21.764 "method": "nvmf_create_subsystem", 00:08:21.764 "req_id": 1 00:08:21.764 } 00:08:21.764 Got JSON-RPC error response 00:08:21.764 response: 00:08:21.764 { 00:08:21.764 "code": -32602, 00:08:21.764 "message": "Invalid MN 1Vk@-:/du<$%yru;@*% +gG\"$9UGh5{unx!u$/>Kw" 00:08:21.764 }' 00:08:21.764 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:21.764 { 00:08:21.764 "nqn": "nqn.2016-06.io.spdk:cnode4731", 00:08:21.764 "model_number": "1Vk@-:/du<$%yru;@*% +gG\"$9UGh5{unx!u$/>Kw", 00:08:21.764 "method": "nvmf_create_subsystem", 00:08:21.764 "req_id": 1 00:08:21.764 } 00:08:21.764 Got JSON-RPC error response 00:08:21.764 response: 00:08:21.764 { 00:08:21.764 "code": -32602, 00:08:21.764 "message": "Invalid MN 1Vk@-:/du<$%yru;@*% +gG\"$9UGh5{unx!u$/>Kw" 00:08:21.764 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:21.764 23:54:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:08:22.022 [2024-05-14 23:54:51.238482] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10c82f0/0x10cc7e0) succeed. 00:08:22.022 [2024-05-14 23:54:51.249244] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10c9930/0x110de70) succeed. 
00:08:22.280 23:54:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:22.538 23:54:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:08:22.538 23:54:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:08:22.538 192.168.100.9' 00:08:22.538 23:54:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:22.538 23:54:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:08:22.538 23:54:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:08:22.796 [2024-05-14 23:54:51.900185] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:22.796 [2024-05-14 23:54:51.900299] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:22.796 23:54:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:22.796 { 00:08:22.796 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:22.796 "listen_address": { 00:08:22.796 "trtype": "rdma", 00:08:22.796 "traddr": "192.168.100.8", 00:08:22.796 "trsvcid": "4421" 00:08:22.796 }, 00:08:22.796 "method": "nvmf_subsystem_remove_listener", 00:08:22.796 "req_id": 1 00:08:22.796 } 00:08:22.796 Got JSON-RPC error response 00:08:22.796 response: 00:08:22.796 { 00:08:22.796 "code": -32602, 00:08:22.796 "message": "Invalid parameters" 00:08:22.796 }' 00:08:22.796 23:54:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:22.796 { 00:08:22.796 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:22.796 "listen_address": { 00:08:22.796 "trtype": "rdma", 00:08:22.796 "traddr": "192.168.100.8", 00:08:22.796 "trsvcid": "4421" 00:08:22.796 }, 00:08:22.796 "method": "nvmf_subsystem_remove_listener", 00:08:22.796 "req_id": 1 00:08:22.796 } 00:08:22.796 Got JSON-RPC error response 00:08:22.796 response: 00:08:22.796 { 00:08:22.796 "code": -32602, 00:08:22.796 "message": "Invalid parameters" 00:08:22.796 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:22.796 23:54:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8930 -i 0 00:08:22.796 [2024-05-14 23:54:52.141048] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8930: invalid cntlid range [0-65519] 00:08:23.054 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:23.054 { 00:08:23.054 "nqn": "nqn.2016-06.io.spdk:cnode8930", 00:08:23.054 "min_cntlid": 0, 00:08:23.054 "method": "nvmf_create_subsystem", 00:08:23.054 "req_id": 1 00:08:23.054 } 00:08:23.054 Got JSON-RPC error response 00:08:23.054 response: 00:08:23.054 { 00:08:23.054 "code": -32602, 00:08:23.054 "message": "Invalid cntlid range [0-65519]" 00:08:23.054 }' 00:08:23.054 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:23.054 { 00:08:23.054 "nqn": "nqn.2016-06.io.spdk:cnode8930", 00:08:23.054 "min_cntlid": 0, 00:08:23.054 "method": "nvmf_create_subsystem", 00:08:23.054 "req_id": 1 00:08:23.054 } 00:08:23.054 Got JSON-RPC error response 00:08:23.054 response: 00:08:23.054 { 00:08:23.054 "code": -32602, 
00:08:23.054 "message": "Invalid cntlid range [0-65519]" 00:08:23.054 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.054 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7459 -i 65520 00:08:23.054 [2024-05-14 23:54:52.389911] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7459: invalid cntlid range [65520-65519] 00:08:23.312 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:23.312 { 00:08:23.312 "nqn": "nqn.2016-06.io.spdk:cnode7459", 00:08:23.312 "min_cntlid": 65520, 00:08:23.312 "method": "nvmf_create_subsystem", 00:08:23.312 "req_id": 1 00:08:23.312 } 00:08:23.312 Got JSON-RPC error response 00:08:23.312 response: 00:08:23.312 { 00:08:23.312 "code": -32602, 00:08:23.312 "message": "Invalid cntlid range [65520-65519]" 00:08:23.312 }' 00:08:23.312 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:23.312 { 00:08:23.312 "nqn": "nqn.2016-06.io.spdk:cnode7459", 00:08:23.312 "min_cntlid": 65520, 00:08:23.312 "method": "nvmf_create_subsystem", 00:08:23.312 "req_id": 1 00:08:23.312 } 00:08:23.312 Got JSON-RPC error response 00:08:23.312 response: 00:08:23.312 { 00:08:23.312 "code": -32602, 00:08:23.312 "message": "Invalid cntlid range [65520-65519]" 00:08:23.312 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.312 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode920 -I 0 00:08:23.312 [2024-05-14 23:54:52.650867] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode920: invalid cntlid range [1-0] 00:08:23.570 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:23.570 { 00:08:23.570 "nqn": "nqn.2016-06.io.spdk:cnode920", 00:08:23.570 "max_cntlid": 0, 00:08:23.570 "method": "nvmf_create_subsystem", 00:08:23.570 "req_id": 1 00:08:23.570 } 00:08:23.570 Got JSON-RPC error response 00:08:23.570 response: 00:08:23.570 { 00:08:23.570 "code": -32602, 00:08:23.570 "message": "Invalid cntlid range [1-0]" 00:08:23.570 }' 00:08:23.570 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:23.570 { 00:08:23.570 "nqn": "nqn.2016-06.io.spdk:cnode920", 00:08:23.570 "max_cntlid": 0, 00:08:23.570 "method": "nvmf_create_subsystem", 00:08:23.570 "req_id": 1 00:08:23.570 } 00:08:23.570 Got JSON-RPC error response 00:08:23.570 response: 00:08:23.570 { 00:08:23.570 "code": -32602, 00:08:23.570 "message": "Invalid cntlid range [1-0]" 00:08:23.570 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.570 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22155 -I 65520 00:08:23.570 [2024-05-14 23:54:52.895781] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22155: invalid cntlid range [1-65520] 00:08:23.570 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:23.570 { 00:08:23.570 "nqn": "nqn.2016-06.io.spdk:cnode22155", 00:08:23.570 "max_cntlid": 65520, 00:08:23.570 "method": "nvmf_create_subsystem", 00:08:23.570 "req_id": 1 00:08:23.570 } 00:08:23.570 Got JSON-RPC error response 00:08:23.570 response: 00:08:23.570 { 00:08:23.570 "code": -32602, 00:08:23.570 "message": "Invalid cntlid range 
[1-65520]" 00:08:23.570 }' 00:08:23.570 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:08:23.570 { 00:08:23.570 "nqn": "nqn.2016-06.io.spdk:cnode22155", 00:08:23.570 "max_cntlid": 65520, 00:08:23.570 "method": "nvmf_create_subsystem", 00:08:23.570 "req_id": 1 00:08:23.570 } 00:08:23.570 Got JSON-RPC error response 00:08:23.570 response: 00:08:23.570 { 00:08:23.570 "code": -32602, 00:08:23.570 "message": "Invalid cntlid range [1-65520]" 00:08:23.570 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.570 23:54:52 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30894 -i 6 -I 5 00:08:23.828 [2024-05-14 23:54:53.144703] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30894: invalid cntlid range [6-5] 00:08:23.828 23:54:53 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:23.828 { 00:08:23.828 "nqn": "nqn.2016-06.io.spdk:cnode30894", 00:08:23.828 "min_cntlid": 6, 00:08:23.828 "max_cntlid": 5, 00:08:23.828 "method": "nvmf_create_subsystem", 00:08:23.828 "req_id": 1 00:08:23.828 } 00:08:23.828 Got JSON-RPC error response 00:08:23.828 response: 00:08:23.828 { 00:08:23.828 "code": -32602, 00:08:23.828 "message": "Invalid cntlid range [6-5]" 00:08:23.828 }' 00:08:23.828 23:54:53 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:23.828 { 00:08:23.828 "nqn": "nqn.2016-06.io.spdk:cnode30894", 00:08:23.828 "min_cntlid": 6, 00:08:23.828 "max_cntlid": 5, 00:08:23.828 "method": "nvmf_create_subsystem", 00:08:23.828 "req_id": 1 00:08:23.828 } 00:08:23.828 Got JSON-RPC error response 00:08:23.828 response: 00:08:23.828 { 00:08:23.828 "code": -32602, 00:08:23.828 "message": "Invalid cntlid range [6-5]" 00:08:23.828 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.828 23:54:53 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:24.086 { 00:08:24.086 "name": "foobar", 00:08:24.086 "method": "nvmf_delete_target", 00:08:24.086 "req_id": 1 00:08:24.086 } 00:08:24.086 Got JSON-RPC error response 00:08:24.086 response: 00:08:24.086 { 00:08:24.086 "code": -32602, 00:08:24.086 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:24.086 }' 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:24.086 { 00:08:24.086 "name": "foobar", 00:08:24.086 "method": "nvmf_delete_target", 00:08:24.086 "req_id": 1 00:08:24.086 } 00:08:24.086 Got JSON-RPC error response 00:08:24.086 response: 00:08:24.086 { 00:08:24.086 "code": -32602, 00:08:24.086 "message": "The specified target doesn't exist, cannot delete it." 
00:08:24.086 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:24.086 rmmod nvme_rdma 00:08:24.086 rmmod nvme_fabrics 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 453629 ']' 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 453629 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 453629 ']' 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 453629 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 453629 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 453629' 00:08:24.086 killing process with pid 453629 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 453629 00:08:24.086 [2024-05-14 23:54:53.343348] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:24.086 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 453629 00:08:24.086 [2024-05-14 23:54:53.432720] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:24.652 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:24.652 23:54:53 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:24.652 00:08:24.652 real 0m7.269s 00:08:24.652 user 0m21.047s 00:08:24.652 sys 0m2.801s 00:08:24.652 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.652 23:54:53 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:24.652 ************************************ 00:08:24.652 END TEST nvmf_invalid 00:08:24.652 ************************************ 00:08:24.652 23:54:53 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:24.652 23:54:53 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:24.652 23:54:53 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.652 23:54:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:24.652 ************************************ 00:08:24.652 START TEST nvmf_abort 00:08:24.652 ************************************ 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:24.652 * Looking for test storage... 00:08:24.652 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.652 23:54:53 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 
00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.653 23:54:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.183 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:08:27.184 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:08:27.184 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:08:27.184 Found net devices under 0000:09:00.0: mlx_0_0 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:09:00.1: mlx_0_1' 00:08:27.184 Found net devices under 0000:09:00.1: mlx_0_1 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:27.184 23:54:56 
nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:27.184 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:27.184 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:08:27.184 altname enp9s0f0np0 00:08:27.184 inet 192.168.100.8/24 scope global mlx_0_0 00:08:27.184 valid_lft forever preferred_lft forever 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:27.184 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:27.184 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:27.184 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:08:27.184 altname enp9s0f1np1 00:08:27.184 inet 192.168.100.9/24 scope global mlx_0_1 00:08:27.185 valid_lft forever preferred_lft forever 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 
]] 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:27.185 192.168.100.9' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:27.185 192.168.100.9' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:27.185 192.168.100.9' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 
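[editor's note] The address-selection logic traced above reduces to peeling the first and second entries off a newline-separated IP list. A minimal standalone sketch, with variable names and the list literal taken from this trace:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # first RDMA IP
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # second IP, if any
  [ -z "$NVMF_FIRST_TARGET_IP" ] && { echo 'no RDMA IPs found'; exit 1; }  # mirrors the traced -z guard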
00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=456415 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 456415 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 456415 ']' 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:27.185 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.185 [2024-05-14 23:54:56.482953] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:08:27.185 [2024-05-14 23:54:56.483046] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.185 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.442 [2024-05-14 23:54:56.561180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:27.442 [2024-05-14 23:54:56.684698] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.442 [2024-05-14 23:54:56.684767] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.442 [2024-05-14 23:54:56.684783] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.442 [2024-05-14 23:54:56.684796] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.442 [2024-05-14 23:54:56.684807] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
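[editor's note] The nvmfappstart/waitforlisten pair above starts the target and blocks until its RPC socket answers. waitforlisten's body is not shown in this excerpt, so the poll below is an assumption built from an RPC known to exist (rpc_get_methods) rather than the harness's exact code; $rootdir stands in for the SPDK checkout path:

  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &   # same flags as the traced run
  nvmfpid=$!
  until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target process died before listening
      sleep 0.5
  done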
00:08:27.442 [2024-05-14 23:54:56.684891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.442 [2024-05-14 23:54:56.684943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.442 [2024-05-14 23:54:56.684948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.700 [2024-05-14 23:54:56.854398] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21bd160/0x21c1650) succeed. 00:08:27.700 [2024-05-14 23:54:56.864836] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21be700/0x2202ce0) succeed. 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.700 23:54:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.700 Malloc0 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.700 Delay0 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.700 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.701 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.701 23:54:57 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:27.701 23:54:57 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.701 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.701 [2024-05-14 23:54:57.045235] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:27.701 [2024-05-14 23:54:57.045566] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:27.958 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.958 23:54:57 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:27.958 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.958 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.958 23:54:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.958 23:54:57 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:27.958 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.958 [2024-05-14 23:54:57.137900] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:30.485 Initializing NVMe Controllers 00:08:30.485 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:30.485 controller IO queue size 128 less than required 00:08:30.485 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:30.485 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:30.485 Initialization complete. Launching workers. 
00:08:30.485 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41089 00:08:30.485 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41150, failed to submit 62 00:08:30.485 success 41090, unsuccess 60, failed 0 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:30.485 rmmod nvme_rdma 00:08:30.485 rmmod nvme_fabrics 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 456415 ']' 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 456415 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 456415 ']' 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 456415 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 456415 00:08:30.485 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:30.486 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:30.486 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 456415' 00:08:30.486 killing process with pid 456415 00:08:30.486 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@965 -- # kill 456415 00:08:30.486 [2024-05-14 23:54:59.322136] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:30.486 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@970 -- # wait 456415 00:08:30.486 [2024-05-14 23:54:59.393155] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:30.486 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.486 23:54:59 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma 
== \t\c\p ]] 00:08:30.486 00:08:30.486 real 0m5.962s 00:08:30.486 user 0m11.861s 00:08:30.486 sys 0m2.289s 00:08:30.486 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:30.486 23:54:59 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.486 ************************************ 00:08:30.486 END TEST nvmf_abort 00:08:30.486 ************************************ 00:08:30.486 23:54:59 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:30.486 23:54:59 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:30.486 23:54:59 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:30.486 23:54:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:30.486 ************************************ 00:08:30.486 START TEST nvmf_ns_hotplug_stress 00:08:30.486 ************************************ 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:30.486 * Looking for test storage... 00:08:30.486 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 
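[editor's note] The common.sh prologue sourced above generates a host NQN and derives the host ID from it. A sketch of that pattern; the parameter expansion is an inference from the two values shown in the trace, not code visible in this excerpt:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")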
00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.486 23:54:59 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.486 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.745 23:54:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
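[editor's note] gather_supported_nvmf_pci_devs, re-run here for the new test, matches PCI functions against vendor:device pairs (Intel E810/X722 plus a list of Mellanox IDs) before mapping them to net devices. A rough sysfs-only equivalent of the matching step, assuming direct sysfs reads in place of the script's pci_bus_cache, whose construction is not shown in this excerpt:

  mellanox=0x15b3
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
      if [[ $vendor == "$mellanox" && $device == 0x1017 ]]; then
          echo "Found ${pci##*/} ($vendor - $device)"   # matches the 'Found 0000:09:00.x' lines
      fi
  done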
00:08:33.279 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:08:33.280 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:08:33.280 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:33.280 23:55:02 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:08:33.280 Found net devices under 0000:09:00.0: mlx_0_0 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:08:33.280 Found net devices under 0000:09:00.1: mlx_0_1 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@73 -- # get_rdma_if_list 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:33.280 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:33.280 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:08:33.280 altname enp9s0f0np0 00:08:33.280 inet 192.168.100.8/24 scope global mlx_0_0 00:08:33.280 valid_lft forever preferred_lft forever 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 
addr show mlx_0_1 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:33.280 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:33.280 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:08:33.280 altname enp9s0f1np1 00:08:33.280 inet 192.168.100.9/24 scope global mlx_0_1 00:08:33.280 valid_lft forever preferred_lft forever 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:33.280 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:33.281 23:55:02 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:33.281 192.168.100.9' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:33.281 192.168.100.9' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:33.281 192.168.100.9' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=458789 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 458789 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 458789 ']' 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.281 23:55:02 
nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:33.281 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.281 [2024-05-14 23:55:02.427836] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:08:33.281 [2024-05-14 23:55:02.427928] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.281 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.281 [2024-05-14 23:55:02.498303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:33.281 [2024-05-14 23:55:02.609097] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.281 [2024-05-14 23:55:02.609154] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.281 [2024-05-14 23:55:02.609169] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.281 [2024-05-14 23:55:02.609180] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.281 [2024-05-14 23:55:02.609189] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.281 [2024-05-14 23:55:02.609273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.281 [2024-05-14 23:55:02.609336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.281 [2024-05-14 23:55:02.609340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.539 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:33.539 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:08:33.539 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.539 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.539 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.539 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.539 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:33.539 23:55:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:33.798 [2024-05-14 23:55:02.988804] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e56160/0x1e5a650) succeed. 00:08:33.798 [2024-05-14 23:55:02.999167] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e57700/0x1e9bce0) succeed. 
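[editor's note] Everything that follows is the hotplug stress proper: build the target stack (subsystem cnode1 with an rdma listener, Malloc0 wrapped in Delay0, plus a NULL1 bdev), start spdk_nvme_perf against it, then repeatedly unplug and replug the namespace while resizing NULL1. Condensed from the rpc.py calls traced below; the real script tracks null_size and the perf pid the same way:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                           # loop while perf still runs
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-unplug nsid 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-plug it back
      $rpc bdev_null_resize NULL1 $((++null_size))                    # grow NULL1 each pass
  done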
00:08:33.798 23:55:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:34.362 23:55:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:34.362 [2024-05-14 23:55:03.641042] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:34.362 [2024-05-14 23:55:03.641363] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:34.362 23:55:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:34.620 23:55:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:34.877 Malloc0 00:08:34.878 23:55:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:35.135 Delay0 00:08:35.135 23:55:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.392 23:55:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:35.650 NULL1 00:08:35.650 23:55:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:35.908 23:55:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=459204 00:08:35.908 23:55:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:35.908 23:55:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:35.908 23:55:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.908 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.309 Read completed with error (sct=0, sc=11) 00:08:37.309 23:55:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 
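Stripped of the xtrace noise, the provisioning sequence traced above is: create the RDMA transport, create subsystem cnode1 with its data and discovery listeners, stack a delay bdev on a malloc bdev, add a resizable null bdev, expose both as namespaces, and start the perf load whose lifetime drives the stress loop. A consolidated sketch, with rpc.py standing in for the full scripts/rpc.py path (a readable condensation of the trace, not the script verbatim):

  # Transport and subsystem setup (ns_hotplug_stress.sh@27-31).
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

  # Bdevs and namespaces (ns_hotplug_stress.sh@32-36): a delay bdev over a
  # 32 MiB malloc bdev, plus a 1000 MiB null bdev the loop will keep resizing.
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Background random-read load (ns_hotplug_stress.sh@40-42).
  spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!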
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.309 23:55:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:37.309 23:55:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:37.572 true 00:08:37.572 23:55:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:37.572 23:55:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.505 23:55:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.762 23:55:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:38.762 23:55:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:38.762 true 00:08:39.020 23:55:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:39.020 23:55:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.585 23:55:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.843 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:08:39.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.843 23:55:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:39.843 23:55:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:40.101 true 00:08:40.101 23:55:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:40.101 23:55:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.033 23:55:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.290 23:55:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:41.290 23:55:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:41.547 true 00:08:41.547 23:55:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:41.547 23:55:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.479 23:55:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.479 23:55:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:42.479 23:55:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:42.737 true 00:08:42.737 23:55:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:42.737 23:55:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.666 23:55:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.923 23:55:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:43.923 23:55:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:43.923 true 00:08:44.180 23:55:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:44.180 23:55:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.744 23:55:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.001 23:55:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:45.001 23:55:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:45.258 true 00:08:45.258 23:55:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:45.258 23:55:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.189 23:55:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.446 23:55:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:46.446 23:55:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:46.446 true 00:08:46.446 23:55:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:46.446 23:55:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.377 23:55:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.633 23:55:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:47.633 23:55:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:47.890 true 00:08:47.890 23:55:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:47.890 23:55:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.820 23:55:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.820 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:08:48.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.820 23:55:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:48.820 23:55:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:49.077 true 00:08:49.077 23:55:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:49.077 23:55:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.009 23:55:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.267 23:55:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:50.267 23:55:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:50.267 true 00:08:50.267 23:55:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:50.267 23:55:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.199 23:55:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.463 23:55:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:51.463 23:55:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:51.770 true 
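Every iteration above follows the same shape from target/ns_hotplug_stress.sh@44-50: while the perf process is still alive, namespace 1 is hot-removed, Delay0 is re-added, and NULL1 is resized one step larger; the 'true' lines are the resize RPC's output. A compact rendering of that loop, again with rpc.py as shorthand, a sketch rather than the script itself:

  # Namespace hotplug loop: runs until the 30 s perf job exits.
  null_size=1000
  while kill -0 "$PERF_PID"; do                                      # @44
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46
    null_size=$((null_size + 1))                                     # @49
    rpc.py bdev_null_resize NULL1 "$null_size"                       # @50, prints 'true'
  done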
00:08:51.770 23:55:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:51.770 23:55:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.336 23:55:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.594 23:55:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:52.594 23:55:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:53.159 true 00:08:53.159 23:55:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:53.159 23:55:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.723 23:55:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.980 23:55:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:53.980 23:55:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:54.238 true 00:08:54.238 23:55:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:54.238 23:55:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.170 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:08:55.170 23:55:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.170 23:55:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:55.170 23:55:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:55.428 true 00:08:55.428 23:55:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:55.428 23:55:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.360 23:55:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.618 23:55:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:56.618 23:55:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:56.618 true 00:08:56.618 23:55:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:56.618 23:55:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.875 23:55:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.133 23:55:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:57.133 23:55:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:57.391 true 00:08:57.391 23:55:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:57.391 23:55:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.323 23:55:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.588
[further identical 'Message suppressed 999 times: Read completed with error (sct=0, sc=11)' notices at 00:08:58.588 elided]
[2024-05-14 23:55:27.870337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.588
[identical 'Read NLB 1 * block size 512 > SGL length 1' errors from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeated continuously from 23:55:27.870423 through 23:55:27.884815 (elapsed 00:08:58.588-00:08:58.591) elided]
[2024-05-14 23:55:27.884858] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.591 [2024-05-14 23:55:27.884901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.591 [2024-05-14 23:55:27.884976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.591 [2024-05-14 23:55:27.885031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.591 [2024-05-14 23:55:27.885082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.591 [2024-05-14 23:55:27.885126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.591 [2024-05-14 23:55:27.885172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.591 [2024-05-14 23:55:27.885233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.591 [2024-05-14 23:55:27.885278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.591 [2024-05-14 23:55:27.885322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.885990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 
[2024-05-14 23:55:27.886296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.886954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.887955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 23:55:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:58.592 [2024-05-14 23:55:27.888590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 23:55:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:58.592 [2024-05-14 23:55:27.888687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.592 [2024-05-14 23:55:27.888816] ctrlr_bdev.c: 
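The repeated *ERROR* line above is a length-validation failure: the NVMe-oF target rejects a READ whose transfer length (NLB * namespace block size, here 1 * 512 = 512 bytes) exceeds the payload length described by the request's SGL (here 1 byte). The trace entries show why it fires so often: ns_hotplug_stress keeps resizing the NULL1 bdev via rpc.py bdev_null_resize while reads are in flight. A minimal sketch of such a guard, assuming illustrative struct and field names rather than the verbatim ctrlr_bdev.c source:

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative request view -- field names are assumptions, not SPDK's. */
struct read_req {
	uint64_t num_blocks; /* NLB from the command (zero's-based, +1 applied) */
	uint32_t block_size; /* namespace block size; 512 in this log */
	uint32_t sgl_length; /* payload bytes described by the SGL; 1 here */
};

/* Reject reads whose transfer length overruns the SGL-described buffer. */
static bool
read_cmd_length_ok(const struct read_req *req)
{
	if (req->num_blocks * req->block_size > req->sgl_length) {
		fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			req->num_blocks, req->block_size, req->sgl_length);
		return false; /* caller completes the command with an error status */
	}
	return true;
}

int
main(void)
{
	/* Reproduce the logged case: 1 block * 512 bytes vs. a 1-byte SGL.
	 * Exits nonzero when the logged condition reproduces. */
	struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
	return read_cmd_length_ok(&req) ? 0 : 1;
}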
[... same *ERROR* line repeated continuously, timestamps 23:55:27.888687 through 23:55:27.907673 elided ...]
00:08:58.597 [2024-05-14 23:55:27.907719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.907767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.907812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.907855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.907902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.907972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.908971] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.909988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 
[2024-05-14 23:55:27.910386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.910963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.911993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.597 [2024-05-14 23:55:27.912043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.912985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913075] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.913969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 
[2024-05-14 23:55:27.914499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.914953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.915844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.916046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.916113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.598 [2024-05-14 23:55:27.916163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.916958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917115] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.917985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 
[2024-05-14 23:55:27.918522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.918941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:58.599 [2024-05-14 23:55:27.919528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 
23:55:27.919875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.919988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.920981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.921187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.921253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:58.599 [2024-05-14 23:55:27.921296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.921338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.921387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.921430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.921474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.599 [2024-05-14 23:55:27.921519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.921982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.922993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923811] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.923967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.924958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.925009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.925053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.925099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.925144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 [2024-05-14 23:55:27.925188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600 
[2024-05-14 23:55:27.925249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.600
[... identical ctrlr_bdev.c *ERROR* line repeated continuously through 2024-05-14 23:55:27.953423 (job timestamps 00:08:58.600-00:08:58.892); duplicate lines elided ...]
[2024-05-14 23:55:27.953469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.953958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.954005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.954049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.954096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.954141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.892 [2024-05-14 23:55:27.954191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.954996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.955955] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.956981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 
[2024-05-14 23:55:27.957354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.957984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.958991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.959034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.959081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.959127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.959172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.959233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.959277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.893 [2024-05-14 23:55:27.959451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.959970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960015] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.960864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 
[2024-05-14 23:55:27.961375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.961956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.962958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963844] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.963997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.964042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.964086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.964130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.964325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.964383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.964428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.964471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.964513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.894 [2024-05-14 23:55:27.964562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.964605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.964648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.964690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.964731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.964768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.964811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.964855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.964896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.964960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 
[2024-05-14 23:55:27.965190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.965994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.966966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967853] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.967966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.968972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.969020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 
[2024-05-14 23:55:27.969200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.969259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.969320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.969365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.969412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.969455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.969497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.969540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.895 [2024-05-14 23:55:27.969586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.969628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.969670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.969712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.969756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.969798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.969843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.969891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.969956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.970987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971716] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.971958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:58.896 [2024-05-14 23:55:27.972460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.972980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973025] ctrlr_bdev.c: 
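(Context for the flood above: the repeated parameters — NLB 1, block size 512, SGL length 1 — indicate a test case deliberately driving the length validation in nvmf_bdev_ctrlr_read_cmd, which rejects a read whose transfer length, NLB times the namespace block size, exceeds the payload length described by the request's SGL; each rejected read then completes with the error status echoed in the suppressed message. Below is a minimal C sketch of that kind of check, not SPDK's actual implementation; the helper name read_cmd_length_ok and its signature are illustrative only.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the validation behind the repeated error:
 * reject a read whose required transfer length (NLB * block size)
 * exceeds the buffer length described by the command's SGL. */
static bool
read_cmd_length_ok(uint64_t nlb, uint64_t block_size, uint32_t sgl_length)
{
	uint64_t transfer_len = nlb * block_size;

	if (transfer_len > sgl_length) {
		/* Mirrors the log line repeated above. */
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu64
			" > SGL length %" PRIu32 "\n",
			nlb, block_size, sgl_length);
		return false;	/* caller completes the command with an error status */
	}
	return true;
}

int
main(void)
{
	/* The case exercised here: 1 block * 512 bytes against a 1-byte SGL. */
	read_cmd_length_ok(1, 512, 1);
	return 0;
}

Every such rejection surfaces as one "Read completed with error (sct=0, sc=15)" completion; the logger collapses the repeats, hence the "Message suppressed 999 times" line above.)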
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.896 [2024-05-14 23:55:27.973798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.973847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.973888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 
[2024-05-14 23:55:27.974389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.974954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.975977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976891] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.976962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.977991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 
[2024-05-14 23:55:27.978324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.978839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.979070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.979132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.897 [2024-05-14 23:55:27.979175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.979982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.980976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981066] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.981994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 
[2024-05-14 23:55:27.982450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.982987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.983901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.984129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.984192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.984257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.984300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.984344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.984391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.984433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.898 [2024-05-14 23:55:27.984477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.984985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985031] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.985967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 
[2024-05-14 23:55:27.986405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.986959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.987993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.988838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989035] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.899 [2024-05-14 23:55:27.989617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.989659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.989701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.989742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.989788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.989829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.989871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.989936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.989985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 
[2024-05-14 23:55:27.990281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.990977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.991993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [2024-05-14 23:55:27.992907] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.900 [... last message repeated several hundred times with advancing timestamps, 2024-05-14 23:55:27.992975 through 23:55:28.020559 (console time 00:08:58.900-00:08:58.906) ...] 00:08:58.906 [2024-05-14 23:55:28.020601] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.020644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.020688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.020731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.020773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.020816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.020857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.020901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.020970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.021920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 
[2024-05-14 23:55:28.021974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.022975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:58.906 [2024-05-14 23:55:28.023467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.023995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.024040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.024086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.906 [2024-05-14 23:55:28.024130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:58.907 [2024-05-14 23:55:28.024466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.024902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.025966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.026991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027302] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.027966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 
[2024-05-14 23:55:28.028624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.028952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.907 [2024-05-14 23:55:28.029422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.029956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.030981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031115] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.031961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 
[2024-05-14 23:55:28.032472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.032974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.033968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.908 [2024-05-14 23:55:28.034497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.034540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.034582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.034623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.034673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.034712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.034762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.034808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035141] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.035996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 
[2024-05-14 23:55:28.036356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.036987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.909 [2024-05-14 23:55:28.037669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:08:58.909 [2024-05-14 23:55:28.037711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:58.909 [... the identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" line repeats several hundred times between 23:55:28.037 and 23:55:28.064 (elapsed 00:08:58.909 through 00:08:58.914); verbatim duplicates omitted ...]
[2024-05-14 23:55:28.064239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.064996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.065980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.066971] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.067957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 
[2024-05-14 23:55:28.068353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.068965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.915 [2024-05-14 23:55:28.069481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.069521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.069562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.069604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.069782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.069845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.069892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.069957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070874] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.070973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.071998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 
[2024-05-14 23:55:28.072223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.072857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.073977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.916 [2024-05-14 23:55:28.074481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.074524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:58.917 [2024-05-14 23:55:28.074699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.074760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.074805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.074849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:58.917 [2024-05-14 23:55:28.074892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.074953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.075984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.076953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077476] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.077828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 
[2024-05-14 23:55:28.078823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.078980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.917 [2024-05-14 23:55:28.079731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.079778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.079820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.079864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.079905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.079971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.080991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081517] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.081982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [2024-05-14 23:55:28.082687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 
[2024-05-14 23:55:28.082729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.918 [last message repeated continuously from 2024-05-14 23:55:28.082771 through 23:55:28.110549 (Jenkins timestamps 00:08:58.918-00:08:58.923); identical duplicates elided]
[2024-05-14 23:55:28.110597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.110641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.110680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.110721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.110762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.110805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.110852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.111962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.112997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.113046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.113093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.113137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.923 [2024-05-14 23:55:28.113181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113287] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.113956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 
[2024-05-14 23:55:28.114638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.114950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.115842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.116970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117233] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.117996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 
[2024-05-14 23:55:28.118596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.118970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.119016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.119069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.119114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.924 [2024-05-14 23:55:28.119157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.119972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.120991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121300] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.121956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 
[2024-05-14 23:55:28.122702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.122957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.123952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.124977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.125027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.125072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.125115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.125160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.125204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.125252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.125295] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.925 [2024-05-14 23:55:28.125340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.125852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:58.926 [2024-05-14 23:55:28.126147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126595] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.126978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 [2024-05-14 23:55:28.127942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926 
[2024-05-14 23:55:28.127986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.926
[... identical *ERROR* line repeated verbatim, timestamps 2024-05-14 23:55:28.128031 through 23:55:28.155911 (elapsed 00:08:58.926-00:08:58.932); a lone "true 00:08:58.929" is interleaved after the 23:55:28.146723 entry ...]
[2024-05-14 23:55:28.155984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.156956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.157973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158532] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.158968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 
[2024-05-14 23:55:28.159917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.159969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.160016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.160061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.160104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.160150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.932 [2024-05-14 23:55:28.160193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.160976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 23:55:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:58.933 [2024-05-14 23:55:28.161679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 23:55:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.933 [2024-05-14 23:55:28.161816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.161960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 
23:55:28.162445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.162999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:58.933 [2024-05-14 23:55:28.163863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.163983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.933 [2024-05-14 23:55:28.164795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.164839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.164885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.164954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.165891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166709] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.166959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.167992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 
[2024-05-14 23:55:28.168183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.168990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.169037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.169081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.169128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.169178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.169238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.169285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.169334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.169381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:58.934 [2024-05-14 23:55:28.169573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.169638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.169686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.169729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.169773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.169817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.169862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.169906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.169978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170797] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.170960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.171969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 
[2024-05-14 23:55:28.172266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.172812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.935 [2024-05-14 23:55:28.173577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:58.936 [2024-05-14 23:55:28.173623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:08:58.936 [... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1, repeated continuously from 2024-05-14 23:55:28.173666 through 23:55:28.179773 ...]
00:08:58.937 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:08:58.937 [... same *ERROR* line repeated continuously from 2024-05-14 23:55:28.180020 through 23:55:28.195125 ...]
00:08:59.216 [2024-05-14 23:55:28.208236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:59.218 [... same *ERROR* line repeated continuously through 2024-05-14 23:55:28.215544 ...]
[2024-05-14 23:55:28.215595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.215641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.215686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.215732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.215778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.215833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.215880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.215923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.215974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.216928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.217990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218223] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.218 [2024-05-14 23:55:28.218664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.218724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.218770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.218819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.218862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.218905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.218959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 
[2024-05-14 23:55:28.219587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.219960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.220980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.221941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222280] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.222962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 
[2024-05-14 23:55:28.223615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.219 [2024-05-14 23:55:28.223795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.223838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.223883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.223927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.223978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.224977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.225956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226090] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.226998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 
[2024-05-14 23:55:28.227510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.227982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.220 [2024-05-14 23:55:28.228839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.228899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.228982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.229995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230237] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.230954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 
[2024-05-14 23:55:28.231418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.231968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.221 [2024-05-14 23:55:28.232598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.222 [2024-05-14 23:55:28.232642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.222 [2024-05-14 23:55:28.232692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.222 [2024-05-14 23:55:28.232738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.222 [2024-05-14 23:55:28.232781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:59.222 [... same nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously, timestamps 2024-05-14 23:55:28.232824 through 23:55:28.244811 ...]
00:08:59.224 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:08:59.224 [... same nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously, timestamps 2024-05-14 23:55:28.245008 through 23:55:28.260460 ...]
00:08:59.227 [2024-05-14 23:55:28.260500] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.260953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 
[2024-05-14 23:55:28.261848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.261985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.227 [2024-05-14 23:55:28.262479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.262523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.262569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.262614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.262659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.262703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.262747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.262791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.262840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.263953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264333] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.264959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 
[2024-05-14 23:55:28.265688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.265980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.266976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.228 [2024-05-14 23:55:28.267644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.267689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.267731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.267773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.267963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268402] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.268959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 
[2024-05-14 23:55:28.269802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.269988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.270989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.271975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272296] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.229 [2024-05-14 23:55:28.272907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.272971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 
[2024-05-14 23:55:28.273677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.273956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.274985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.275961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276412] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.276981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.277033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.277076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.277119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.277180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.277228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.277288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.230 [2024-05-14 23:55:28.277352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.231 [2024-05-14 23:55:28.277397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.231 [2024-05-14 23:55:28.277443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.231 [2024-05-14 23:55:28.277504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.231 [2024-05-14 23:55:28.277553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.231 [2024-05-14 23:55:28.277598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.231 [2024-05-14 23:55:28.277648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.231 
00:08:59.231 [2024-05-14 23:55:28.277693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical *ERROR* line repeats several hundred times, timestamps 23:55:28.277737 through 23:55:28.295893, elapsed 00:08:59.231-00:08:59.234 ...]
00:08:59.234 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the identical *ERROR* line continues, timestamps 23:55:28.296082 through 23:55:28.318165, elapsed 00:08:59.234-00:08:59.236; the final repetition is truncated in the captured log ...]
size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.318973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319523] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.319987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.320032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.320076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.320119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.320165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.320207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.320381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.320444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.236 [2024-05-14 23:55:28.320491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.320536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.320584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.320628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.320674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.320722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.320767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.320813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.320857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 
[2024-05-14 23:55:28.320901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.320953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.321816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.322968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323483] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.323987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 
[2024-05-14 23:55:28.324849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.324986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.237 [2024-05-14 23:55:28.325711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.325755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.325802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.325845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.325889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.325936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.325982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.326842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327633] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.327996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.328977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 
[2024-05-14 23:55:28.329022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.329963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.330992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.238 [2024-05-14 23:55:28.331044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331610] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.331954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.332959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 
[2024-05-14 23:55:28.333007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.333969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.239 [2024-05-14 23:55:28.334826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.334869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.334916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.334965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240 [2024-05-14 23:55:28.335674] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.240
[the ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line above repeats verbatim for each request, timestamps [2024-05-14 23:55:28.335718] through [2024-05-14 23:55:28.363333], log marks 00:08:59.240-00:08:59.245; duplicate records omitted]
00:08:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:59.245 [2024-05-14 23:55:28.363333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.363978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364706] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.245 [2024-05-14 23:55:28.364988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.365964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 
[2024-05-14 23:55:28.366056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.366978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.367983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368702] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.368992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.246 [2024-05-14 23:55:28.369494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.369537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.369580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.369624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.369666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.369710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.369753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.369794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.369839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 
[2024-05-14 23:55:28.370045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.370963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.371982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372582] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.372998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 
[2024-05-14 23:55:28.373949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.373995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.247 [2024-05-14 23:55:28.374590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.374634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.374675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.374716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.374757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.374956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.375997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376613] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.376973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 
[2024-05-14 23:55:28.377887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.377997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.378991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.379799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.380036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.248 [2024-05-14 23:55:28.380101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380649] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.380998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.381986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 
[2024-05-14 23:55:28.382033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.382992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.383037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.383079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 [2024-05-14 23:55:28.383122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.249 23:55:28 
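The recurring ctrlr_bdev.c:309 error above is nvmf_bdev_ctrlr_read_cmd rejecting reads whose data buffer is too small: each command asks for NLB = 1 logical block of 512 bytes, i.e. 512 bytes of data, while the request's SGL describes only 1 byte, so the target completes the read with an error (the suppressed "Read completed with error" messages) instead of transferring data. Below is a minimal sketch of that length check with a hypothetical simplified signature; the real function in SPDK's lib/nvmf/ctrlr_bdev.c operates on bdev and request structures and returns an NVMe completion status, so read_len_ok() and its parameters here are illustrative only.

    /* Sketch of the validation behind "Read NLB 1 * block size 512 > SGL length 1".
     * read_len_ok() and its arguments are illustrative, not the SPDK API. */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool read_len_ok(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
    {
        /* Total bytes the read would transfer must fit in the SGL buffer. */
        if (nlb * block_size > sgl_length) {
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
            return false; /* read completed with error; nothing transferred */
        }
        return true;
    }

    int main(void)
    {
        /* The case from this log: 1 block * 512 bytes = 512 > SGL length 1. */
        read_len_ok(1, 512, 1); /* prints the error line and returns false */
        return 0;
    }

In the ns_hotplug_stress steps that follow, these rejections pile up while the script repeatedly removes namespace 1, re-adds the Delay0 bdev as a namespace, and grows the NULL1 bdev with bdev_null_resize, all with I/O still in flight; the kill -0 459204 probes in between check that the target process is still alive.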
23:55:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.249
Message suppressed 999 times: Read completed with error (sct=0, sc=11) (identical record repeated 7 times, 00:08:59.249-00:08:59.507)
23:55:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:59.507
23:55:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:59.507
[2024-05-14 23:55:28.767072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:59.765
true 00:08:59.765
23:55:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:08:59.765
23:55:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.027
23:55:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.304
23:55:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:00.304
23:55:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:00.304
true 00:09:00.304
23:55:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:09:00.304
23:55:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.698
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.698
23:55:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.698
Message suppressed 999 times: Read completed with error (sct=0, sc=11) (identical record repeated 9 times at 00:09:01.698)
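The xtrace markers in the records above (sh@44-sh@50) trace passes of the hot-plug loop in target/ns_hotplug_stress.sh. A hedged reconstruction from the visible lines (PERF_PID and the starting size are illustrative stand-ins -- the trace shows the literal PID 459204 -- while the rpc.py subcommands, NQN, and bdev names are verbatim from the trace):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1018
    while kill -0 "$PERF_PID"; do                   # @44: loop while the I/O generator is alive
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove namespace 1 under load
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0  # @46: re-attach the Delay0 bdev as ns 1
        null_size=$((null_size + 1))                # @49: 1019, 1020, 1021, ...
        "$rpc" bdev_null_resize NULL1 "$null_size"  # @50: resize NULL1 while reads are in flight
    done

The two error families appear to track the two halves of that body: the suppressed sc=11 completions cluster around the remove_ns/add_ns calls, while the SGL-length rejections burst around the bdev_null_resize calls.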
[2024-05-14 23:55:30.885003 .. 23:55:30.903667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.698 (identical record repeated ~350 times; only the microsecond timestamp advances)
23:55:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:01.704
[2024-05-14 23:55:30.903714 .. 23:55:30.903818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.704 (identical record repeated 3 times)
23:55:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:01.704
[2024-05-14 23:55:30.903866 .. 23:55:30.909337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.705 (identical record repeated ~110 times; only the microsecond timestamp advances)
[2024-05-14 23:55:30.909382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.705 [2024-05-14 23:55:30.909430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.705 [2024-05-14 23:55:30.909474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.705 [2024-05-14 23:55:30.909517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.909993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.910953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.911881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912095] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.912984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.913031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.913078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.706 [2024-05-14 23:55:30.913125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 
[2024-05-14 23:55:30.913361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.913947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.914952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.915995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916137] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.707 [2024-05-14 23:55:30.916738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.916784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.916828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.916869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.916938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.916990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 
[2024-05-14 23:55:30.917548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.917954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.918897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.919999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920258] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.708 [2024-05-14 23:55:30.920702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.920766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.920811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.920856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.920898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.920979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 
[2024-05-14 23:55:30.921698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.921998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:01.709 [2024-05-14 23:55:30.922437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.922980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 
23:55:30.923075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.923898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:01.709 [2024-05-14 23:55:30.924445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.924946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.925008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.925059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.925106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.925150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.925196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.709 [2024-05-14 23:55:30.925245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.925976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.926989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710 [2024-05-14 23:55:30.927035] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.710
[last message repeated verbatim several hundred times, timestamps 2024-05-14 23:55:30.927122 through 23:55:30.955033, elapsed counter 00:09:01.710 through 00:09:01.717]
[2024-05-14 23:55:30.955080] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.955998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 
[2024-05-14 23:55:30.956459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.956974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.957960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.958008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.958054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.958099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.958143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.958188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.958247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.717 [2024-05-14 23:55:30.958292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.958819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959133] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.959997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 
[2024-05-14 23:55:30.960328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.960972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.718 [2024-05-14 23:55:30.961792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.961835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.961875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.961944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.961990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.962988] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.963924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 
[2024-05-14 23:55:30.964377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.964958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.719 [2024-05-14 23:55:30.965796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.965840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.965882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.965947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.965993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966889] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.966999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.967987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 
[2024-05-14 23:55:30.968318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.968967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.720 [2024-05-14 23:55:30.969607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.969650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.969698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.969740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.969782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.969826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.969868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.969941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.969986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.970966] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.971981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.972164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.972226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 [2024-05-14 23:55:30.972289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.721 
[2024-05-14 23:55:30.972341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:01.722 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:01.728 [... the identical ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" error repeats for entry timestamps 23:55:30.972341 through 23:55:30.999588; several hundred duplicate log lines elided ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:30.999631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:30.999679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:30.999853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:30.999926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:30.999984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.000888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 
[2024-05-14 23:55:31.000958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.001986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.002927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003610] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.003988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.004921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 
[2024-05-14 23:55:31.004988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.728 [2024-05-14 23:55:31.005578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.005625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.005667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.005712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.005754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.005796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.005843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.005885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.005950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.005995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.006973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007484] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.007832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 
[2024-05-14 23:55:31.008822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.008977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.009959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.010962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011511] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.011970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.012017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.012061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.012108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.729 [2024-05-14 23:55:31.012150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 
[2024-05-14 23:55:31.012717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.012989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.013988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.014961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015498] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.015979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 [2024-05-14 23:55:31.016809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:01.730 
[2024-05-14 23:55:31.016853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error line repeated several hundred times, wall clock 23:55:31.016903 through 23:55:31.026112, elapsed stamps 00:09:01.730 through 00:09:02.021]
00:09:02.021 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error line repeated several hundred more times, wall clock 23:55:31.026301 through 23:55:31.044900, elapsed stamps 00:09:02.021 through 00:09:02.026; repetition continues]
size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.044953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.045970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046110] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.046956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 
[2024-05-14 23:55:31.047474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.026 [2024-05-14 23:55:31.047945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.048961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.049978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050169] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.050968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 
[2024-05-14 23:55:31.051539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.051971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.027 [2024-05-14 23:55:31.052023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.052966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.053979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054069] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.054973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 
[2024-05-14 23:55:31.055438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.055993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.028 [2024-05-14 23:55:31.056711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.056755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.056799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.056842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.056891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.056943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.056990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.057956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058093] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.058977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 
[2024-05-14 23:55:31.059460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.059989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.060897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.061087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.061150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.061196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.061241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.029 [2024-05-14 23:55:31.061288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030 [2024-05-14 23:55:31.061960] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.030
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats several hundred more times, timestamps 2024-05-14 23:55:31.062008 through 23:55:31.077277 ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:02.033
[... repetition continues, timestamps 2024-05-14 23:55:31.077508 through 23:55:31.102125, elapsed 00:09:02.030 through 00:09:02.036 ...]
[2024-05-14 23:55:31.102173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.102955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.103136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.103200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.103249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.103295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.103340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.103385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.103432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.103475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.036 [2024-05-14 23:55:31.103525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.103980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104822] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.104971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.105972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 
[2024-05-14 23:55:31.106018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.037 [2024-05-14 23:55:31.106783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.106829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.106874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.106917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.106967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.107863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108725] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.108965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.109995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 
[2024-05-14 23:55:31.110084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.110953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.111001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.038 [2024-05-14 23:55:31.111044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.111974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112572] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.112999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 
[2024-05-14 23:55:31.113903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.113960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.114955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.115955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.116001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.116198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.039 [2024-05-14 23:55:31.116262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116603] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.116980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.117944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 
[2024-05-14 23:55:31.117988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.118955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.119980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120517] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.120982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.040 [2024-05-14 23:55:31.121203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 
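The flood above is a single validation failure exercised in a loop by the nvmf unit tests: ctrlr_bdev.c:309 rejects any read whose payload (NLB times the namespace block size) would overflow the SGL buffer the host supplied. As a rough illustration only, the check amounts to the sketch below; the struct and helper names here are invented stand-ins for this log, not SPDK's real types or API.

    /*
     * Minimal sketch of the length check behind the repeated *ERROR*
     * line above. The real logic lives in SPDK's lib/nvmf/ctrlr_bdev.c
     * (nvmf_bdev_ctrlr_read_cmd); fake_read_req and read_len_ok are
     * simplified stand-ins, not SPDK's actual definitions.
     */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct fake_read_req {
            uint64_t num_blocks; /* NLB: logical blocks requested */
            uint64_t block_size; /* bytes per logical block */
            uint32_t sgl_length; /* bytes the supplied SGL can hold */
    };

    /* Reject a read whose payload would overflow the supplied SGL. */
    static bool read_len_ok(const struct fake_read_req *req)
    {
            if (req->num_blocks * req->block_size > req->sgl_length) {
                    fprintf(stderr,
                            "Read NLB %" PRIu64 " * block size %" PRIu64
                            " > SGL length %" PRIu32 "\n",
                            req->num_blocks, req->block_size,
                            req->sgl_length);
                    return false;
            }
            return true;
    }

    int main(void)
    {
            /* The values seen in this log: 1 block of 512 bytes
             * against a 1-byte SGL, so 512 > 1 and the read fails. */
            struct fake_read_req req = {
                    .num_blocks = 1, .block_size = 512, .sgl_length = 1
            };
            return read_len_ok(&req) ? 0 : 1;
    }

With the log's values (NLB 1, block size 512, SGL length 1) the comparison is 512 > 1, so every iteration of the test trips the same branch and prints the same line, which is why the message repeats verbatim with only the timestamp changing.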
[2024-05-14 23:55:31.121903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.121973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.122980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.123998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124636] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 true 00:09:02.041 [2024-05-14 23:55:31.124830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.124976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:09:02.041 [2024-05-14 23:55:31.125861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.125956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.126009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.126191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.126257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.126307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.126355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.041 [2024-05-14 23:55:31.126401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.126964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.127988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128604] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.128974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.129926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 
[2024-05-14 23:55:31.129984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.130028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.130079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.130126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.130173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.130218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.130263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.130316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.042 [2024-05-14 23:55:31.130363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.130985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.131976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132583] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.132992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.133923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 
[2024-05-14 23:55:31.133992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.134975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.135022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.135068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.135115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.135159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.135209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.135255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.135301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.135346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.043 [2024-05-14 23:55:31.135389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.135951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136728] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.136970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.137971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 
[2024-05-14 23:55:31.138132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.138970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 23:55:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:09:02.044 [2024-05-14 23:55:31.139114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 23:55:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.044 [2024-05-14 23:55:31.139257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.139952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > 
SGL length 1 00:09:02.044 [2024-05-14 23:55:31.140635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.140683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.140726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.140773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.140819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.140862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.140908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.140981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:02.045 [2024-05-14 23:55:31.141372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.045 [2024-05-14 23:55:31.141974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... same read error repeated verbatim; repeats omitted ...] 
00:09:02.046 [2024-05-14 23:55:31.147306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.147876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148696] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.148995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.149975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 
[2024-05-14 23:55:31.150168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.046 [2024-05-14 23:55:31.150939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.150991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.151966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152725] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.152962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.153980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 
[2024-05-14 23:55:31.154125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.154991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.155997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.047 [2024-05-14 23:55:31.156820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.156862] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.156905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.156986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.157963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 
[2024-05-14 23:55:31.158286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.158976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.159993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160840] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.160949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.161975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.162022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.162066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.162117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.162161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.048 [2024-05-14 23:55:31.162222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 
[2024-05-14 23:55:31.162272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.162952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.163941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.164796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.165039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.165103] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.049 [2024-05-14 23:55:31.192969] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.193977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 
[2024-05-14 23:55:31.194310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:02.054 [2024-05-14 23:55:31.194895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.194977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 23:55:31.195573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.054 [2024-05-14 
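For context, the repeated message comes from a guard in SPDK's ctrlr_bdev.c (nvmf_bdev_ctrlr_read_cmd) that rejects a read whose implied transfer length (NLB x block size) exceeds the buffer space described by the command's SGL. Below is a minimal, self-contained sketch of that kind of check, not the actual SPDK source; the function name validate_read_length, the local status defines, and the demo harness are hypothetical, chosen only to reproduce both lines seen in this log:

#include <stdint.h>
#include <stdio.h>

/* NVMe generic status codes (hypothetical local defines for the demo). */
#define NVME_SCT_GENERIC                 0x0
#define NVME_SC_SUCCESS                  0x00
#define NVME_SC_DATA_SGL_LENGTH_INVALID  0x0f  /* 15 decimal, matching sc=15 in the log */

/*
 * Guard modeled on the check in nvmf_bdev_ctrlr_read_cmd(): fail the read
 * if the transfer implied by the command (NLB * block size) is larger than
 * the buffer space described by its SGL.
 */
static int
validate_read_length(uint64_t nlb, uint32_t block_size, uint32_t sgl_length,
                     uint8_t *sct, uint8_t *sc)
{
    if (nlb * block_size > sgl_length) {
        fprintf(stderr, "*ERROR*: Read NLB %llu * block size %u > SGL length %u\n",
                (unsigned long long)nlb, block_size, sgl_length);
        *sct = NVME_SCT_GENERIC;
        *sc  = NVME_SC_DATA_SGL_LENGTH_INVALID;
        return -1;
    }
    *sct = NVME_SCT_GENERIC;
    *sc  = NVME_SC_SUCCESS;
    return 0;
}

int
main(void)
{
    uint8_t sct, sc;

    /* The failing case from this log: NLB 1, block size 512, SGL length 1. */
    if (validate_read_length(1, 512, 1, &sct, &sc) != 0) {
        printf("Read completed with error (sct=%u, sc=%u)\n", sct, sc);
    }
    return 0;
}

Note that the suppressed completion status (sct=0, sc=15) is consistent with the *ERROR* text: SCT 0x0 is Generic Command Status, and SC 15 decimal is 0x0f, Data SGL Length Invalid.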
[... the same *ERROR* line from ctrlr_bdev.c:309 repeated verbatim several hundred more times, timestamps 23:55:31.194895 through 23:55:31.215347 ...]
00:09:02.057 [2024-05-14 23:55:31.215390] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.215972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.216020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.216065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.216263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.216324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.216377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.057 [2024-05-14 23:55:31.216418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 
[2024-05-14 23:55:31.216735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.216979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.217958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.218964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219219] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.219986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 
[2024-05-14 23:55:31.220572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.220944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.221971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.058 [2024-05-14 23:55:31.222476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.222518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.222559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.222733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.222794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.222840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.222889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.222953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223241] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.223991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 
[2024-05-14 23:55:31.224558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.224965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.225959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.226985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227031] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.227979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 
[2024-05-14 23:55:31.228394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.228977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.229027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.229204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.059 [2024-05-14 23:55:31.229282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.229967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.230998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231043] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.231991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.232037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.232084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.232132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.232179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 [2024-05-14 23:55:31.232241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060 
[2024-05-14 23:55:31.232303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.060
[... identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd read errors repeated for every timestamp from 23:55:31.232345 through 23:55:31.245457 (elapsed 00:09:02.060 - 00:09:02.062) ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:02.062
[... identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd read errors repeated for every timestamp from 23:55:31.245634 through 23:55:31.260089 (elapsed 00:09:02.062 - 00:09:02.064) ...]
[2024-05-14 23:55:31.260138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:02.064 [2024-05-14 23:55:31.260182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.064 [2024-05-14 23:55:31.260230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.064 [2024-05-14 23:55:31.260291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.064 [2024-05-14 23:55:31.260331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.064 [2024-05-14 23:55:31.260371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.064 [2024-05-14 23:55:31.260416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.064 [2024-05-14 23:55:31.260458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.064 [2024-05-14 23:55:31.260635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.260695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.260741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.260789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.260832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.260876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.260927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.260995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261508] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.261969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 
[2024-05-14 23:55:31.262850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.262962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.263918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.065 [2024-05-14 23:55:31.264640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.264681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.264723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.264766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.264809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.264852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.264897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.264970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265381] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.265957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 
[2024-05-14 23:55:31.266735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.266985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.267975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.268936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269413] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.066 [2024-05-14 23:55:31.269711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.269753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.269793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.269834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.269879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.269934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.269995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 
[2024-05-14 23:55:31.270752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.270992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.271898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.272990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273280] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.273988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 
[2024-05-14 23:55:31.274623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.067 [2024-05-14 23:55:31.274853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.274905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.274997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.275970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.276882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.277088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.277149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.277195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.277260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.277305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.277347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.277393] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.068 [2024-05-14 23:55:31.277435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error line repeated several hundred times between 23:55:31.277 and 23:55:31.305 (elapsed 00:09:02.068 through 00:09:02.074); duplicate records elided ...]
00:09:02.072 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:02.074 [2024-05-14 23:55:31.305096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 
[2024-05-14 23:55:31.305157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.305992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.306984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307714] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.307958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.074 [2024-05-14 23:55:31.308721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.308764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.308805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.308854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.308899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.308966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 
[2024-05-14 23:55:31.309107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.309877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.310980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311783] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.311988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.312961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 
[2024-05-14 23:55:31.313007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.075 [2024-05-14 23:55:31.313884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.313948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.313994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.314815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315673] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.315955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.316939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 
[2024-05-14 23:55:31.317095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.317927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.318976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.319025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.319074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.076 [2024-05-14 23:55:31.319118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319622] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.319991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.320943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 
[2024-05-14 23:55:31.320990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.321970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.322019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.322063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.322109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.322154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.322199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.322263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.322307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.077 [2024-05-14 23:55:31.322352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c:309 entries repeat continuously, timestamps 23:55:31.322397 through 23:55:31.348333, elapsed 00:09:02.077-00:09:02.350 ...]
00:09:02.350 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:02.350 [2024-05-14 23:55:31.348508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical entries continue through 23:55:31.350552, elapsed 00:09:02.351 ...]
[2024-05-14 23:55:31.350594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.350638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.350687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.350728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.350770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.350813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.350859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.350903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.350969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.351971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.352983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.353034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.353077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.351 [2024-05-14 23:55:31.353122] ctrlr_bdev.c: 
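Every message in the burst above is the same validation failure from ctrlr_bdev.c:309 (nvmf_bdev_ctrlr_read_cmd): each read asks for NLB = 1 block of 512 bytes while the request's SGL describes only 1 byte of buffer, so the target rejects the command before it ever reaches the bdev. A minimal standalone sketch of that bounds check follows; the struct and function names are illustrative, not SPDK's, and it assumes only that the comparison is bytes-requested versus SGL-described bytes:

/*
 * Sketch of the length check implied by the repeated error above.
 * Not the SPDK source; read_req and read_length_valid are invented names.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct read_req {
    uint64_t num_blocks; /* NLB: number of logical blocks requested */
    uint32_t block_size; /* logical block size of the backing bdev */
    uint32_t sgl_length; /* total bytes the SGL-described buffer holds */
};

static bool read_length_valid(const struct read_req *req)
{
    /* num_blocks is 64-bit, so the multiply cannot overflow 32 bits. */
    if (req->num_blocks * req->block_size > req->sgl_length) {
        fprintf(stderr,
                "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n",
                req->num_blocks, req->block_size, req->sgl_length);
        return false; /* fail the command instead of touching the bdev */
    }
    return true;
}

int main(void)
{
    /* The exact values from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
    struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
    return read_length_valid(&req) ? 0 : 1;
}

Run against the log's values this prints the same error line once and exits non-zero; the real target instead completes the NVMe command with an error status, which is what produces the "Read completed with error" suppression notices interleaved below.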
00:09:02.351 23:55:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:02.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
[... the previous "Message suppressed 999 times (sct=0, sc=11)" notice repeated 7 more times, elided ...]
00:09:02.351 [2024-05-14 23:55:31.591131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 read error repeated continuously from 23:55:31.591201 through 23:55:31.602766, elided ...]
00:09:02.354 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same ctrlr_bdev.c:309 read error repeated continuously from 23:55:31.602964 through 23:55:31.609356, elided ...]
00:09:02.355 23:55:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.609440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.609490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 23:55:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:02.355 [2024-05-14 23:55:31.609534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.609591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.609750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.609796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.609862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.609927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.609984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.355 [2024-05-14 23:55:31.610683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
00:09:02.355 (the same ctrlr_bdev.c:309 read-length error continues uninterrupted around the resize call, from [2024-05-14 23:55:31.609400] through [2024-05-14 23:55:31.634194], elapsed 00:09:02.355-00:09:02.360)
[2024-05-14 23:55:31.634247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.634952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.635995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.360 [2024-05-14 23:55:31.636720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.619 true 00:09:02.619 23:55:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204 00:09:02.619 23:55:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.877 [2024-05-14 23:55:32.111441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:02.877 
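For context on the error flood above: nvmf_bdev_ctrlr_read_cmd (in ctrlr_bdev.c, per the log prefix) rejects a read whenever the requested transfer, NLB times the block size, exceeds the SGL length the host supplied; here that is 1 block x 512 bytes = 512 bytes against a 1-byte SGL, so every read issued against the flapping namespace fails. A minimal shell sketch of the same comparison, using the operands from the log line (the variable names are illustrative only, not SPDK's internals):

    # Restate the rejected-read condition from the entries above.
    nlb=1; block_size=512; sgl_length=1
    if (( nlb * block_size > sgl_length )); then
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}"
    fi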
[... identical nvmf_bdev_ctrlr_read_cmd errors repeated; duplicates elided ...]
00:09:02.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:02.877 23:55:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:02.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:02.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:02.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:03.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:03.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:03.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:03.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:03.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:03.135 23:55:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:09:03.135 23:55:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:09:03.393 true
00:09:03.393 23:55:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204
00:09:03.393 23:55:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:04.322 23:55:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:04.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:04.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:04.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:04.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:04.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:04.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:04.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:04.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:04.322 23:55:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:09:04.322 23:55:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:09:04.579 true
00:09:04.579 23:55:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204
00:09:04.579 23:55:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:05.512 23:55:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:05.770 23:55:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:09:05.770 23:55:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:09:05.770 true
00:09:05.770 23:55:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204
00:09:05.770 23:55:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:06.701 23:55:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:06.958 23:55:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:09:06.958 23:55:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:09:07.215 true
00:09:07.215 23:55:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204
00:09:07.215 23:55:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:07.472 23:55:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:07.729 23:55:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:09:07.729 23:55:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:09:07.987 true
00:09:07.987 23:55:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204
00:09:07.987 23:55:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
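Every iteration above has the same shape, driven by ns_hotplug_stress.sh lines 44-50 as the sh@NN trace markers show. A hypothetical reconstruction of that loop, assembled only from the traced commands (the loop framing and variable handling are assumptions; the RPC calls, PID, and paths are verbatim from the log):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1022
    while kill -0 459204; do                                          # line 44: loop while the I/O generator is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove namespace 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: hot-add it back
        null_size=$((null_size + 1))                                  # line 49: 1023, 1024, ... per iteration
        $rpc bdev_null_resize NULL1 "$null_size"                      # line 50: grow the null bdev under I/O (prints "true")
    done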
00:09:08.245 23:55:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:08.502 23:55:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:08.502 23:55:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:08.760 true
00:09:08.760 23:55:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204
00:09:08.760 23:55:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:08.760 Initializing NVMe Controllers
00:09:08.760 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:08.760 Controller IO queue size 128, less than required.
00:09:08.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:08.760 Controller IO queue size 128, less than required.
00:09:08.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:08.760 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:08.760 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:08.760 Initialization complete. Launching workers.
00:09:08.760 ========================================================
00:09:08.760                                                                                                      Latency(us)
00:09:08.760 Device Information                                                           :       IOPS      MiB/s    Average        min        max
00:09:08.760 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    7080.44       3.46   15110.73    1119.90 1173734.92
00:09:08.760 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   24988.16      12.20    5122.42    2746.04  366436.36
00:09:08.760 ========================================================
00:09:08.760 Total                                                                        :   32068.60      15.66    7327.74    1119.90 1173734.92
00:09:08.760
00:09:09.018 23:55:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:09.018 23:55:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:09.018 23:55:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:09.276 true
00:09:09.276 23:55:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 459204
00:09:09.276 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (459204) - No such process
00:09:09.276 23:55:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 459204
00:09:09.276 23:55:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:09.534 23:55:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
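As a quick consistency check on the perf summary above: the Total row is just the two per-namespace rows combined, with total IOPS the sum and average latency the IOPS-weighted mean. A one-off awk sketch, numbers copied from the table:

    awk 'BEGIN {
        iops1 = 7080.44;  avg1 = 15110.73   # NSID 1 row
        iops2 = 24988.16; avg2 = 5122.42    # NSID 2 row
        total = iops1 + iops2
        printf "Total IOPS %.2f, weighted average %.2f us\n", total, (iops1 * avg1 + iops2 * avg2) / total
    }'
    # prints: Total IOPS 32068.60, weighted average 7327.74 us -- matching the Total row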
00:09:09.791 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:09.791 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:09.791 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:09.791 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:09.791 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:10.049 null0
00:09:10.049 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:10.049 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:10.049 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:10.307 null1
00:09:10.307 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:10.307 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:10.307 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:10.566 null2
00:09:10.566 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:10.566 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:10.566 23:55:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:09:10.823 null3
00:09:10.823 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:10.823 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:10.823 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:09:11.081 null4
00:09:11.081 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:11.081 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:11.081 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:09:11.339 null5
00:09:11.339 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:11.339 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:11.339 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:09:11.596 null6
00:09:11.596 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:11.596 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
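The sh@58-sh@60 markers above trace the worker setup: eight null bdevs, one per thread. A hypothetical reconstruction of that fragment (the for-loop framing is assumed from the (( i = 0 )) / (( i < nthreads )) / (( ++i )) traces; the rpc.py arguments -- bdev name, 100 MB size, 4096-byte block size -- are verbatim from the log):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096   # prints the new bdev name, e.g. "null0"
    done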
00:09:11.596 23:55:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:09:11.855 null7
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 463536 463537 463539 463541 463543 463545 463547 463549
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:11.855 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:12.114 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:12.114 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:12.114 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:12.114 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:12.114 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:12.114 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:12.114 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:12.114 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
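The sh@14-sh@18 and sh@62-sh@66 markers above trace eight concurrent add/remove workers plus the parent that spawns and awaits them. A hypothetical reconstruction assembled from those markers (the function and loop framing are assumptions; the RPC calls, nsid/bdev pairings, and the wait over the worker PIDs are taken from the trace):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()

    add_remove() {                        # traced at lines 14-18
        local nsid=$1 bdev=$2             # line 14
        for ((i = 0; i < 10; i++)); do    # line 16: ten add/remove rounds per worker
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
        done
    }

    for ((i = 0; i < nthreads; i++)); do  # traced at lines 62-64
        add_remove $((i + 1)) "null$i" &  # worker N flaps nsid N backed by null(N-1)
        pids+=($!)
    done
    wait "${pids[@]}"                     # line 66: e.g. "wait 463536 463537 ..." in the log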
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.373 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:12.631 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:12.632 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:12.632 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:12.632 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:12.632 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:12.632 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:12.632 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:12.632 23:55:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:12.890 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:13.148 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:13.148 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:13.148 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:13.148 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:13.148 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:13.148 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:13.148 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:13.148 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.407 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:13.665 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:13.665 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:13.665 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:13.665 23:55:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:13.665 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:13.665 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:13.665 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:13.665 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:13.924 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:14.186 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:14.186 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.186 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:14.186 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:14.186 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.186 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:14.445 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:14.445 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:14.445 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:14.445 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:14.445 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:14.445 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:14.445 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:14.445 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:14.703 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:14.703 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.703 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:14.703 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:14.703 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.704 23:55:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.962 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.962 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.962 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.962 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.962 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.962 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.962 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.962 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.221 23:55:44 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.221 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.479 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.479 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.479 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.479 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.479 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.479 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.479 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.479 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.737 
23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.737 23:55:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.995 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.995 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.995 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.995 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.995 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.995 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.995 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.995 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.253 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.254 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.512 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.512 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.512 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.512 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:09:16.512 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.512 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.512 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.512 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.770 23:55:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.770 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.029 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.029 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.029 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.029 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.029 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.029 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.029 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.029 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.287 23:55:46 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:17.287 rmmod nvme_rdma 00:09:17.287 rmmod nvme_fabrics 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 458789 ']' 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 458789 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 458789 ']' 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 458789 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:17.287 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 458789 00:09:17.545 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:17.545 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:17.545 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 458789' 00:09:17.545 killing process with pid 458789 00:09:17.545 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 458789 00:09:17.545 [2024-05-14 23:55:46.636375] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:17.545 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 458789 00:09:17.545 [2024-05-14 23:55:46.705909] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:17.804 23:55:46 
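The interleaved add/remove trace above is consistent with eight concurrent workers, one per null bdev, each looping ten times over lines 16-18 of ns_hotplug_stress.sh. A minimal sketch of that pattern, reconstructed from the trace rather than taken from the script itself:

  # Hedged reconstruction of the traced hot-plug loop (not the verbatim script).
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do
      "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
      "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
  }
  for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &   # backgrounding would explain the shuffled ordering in the trace
  done
  wait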
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.804 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:17.804 00:09:17.804 real 0m47.227s 00:09:17.804 user 3m43.006s 00:09:17.804 sys 0m12.046s 00:09:17.804 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:17.804 23:55:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.804 ************************************ 00:09:17.804 END TEST nvmf_ns_hotplug_stress 00:09:17.804 ************************************ 00:09:17.804 23:55:47 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:09:17.804 23:55:47 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:17.804 23:55:47 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:17.804 23:55:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:17.804 ************************************ 00:09:17.804 START TEST nvmf_connect_stress 00:09:17.804 ************************************ 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:09:17.804 * Looking for test storage... 00:09:17.804 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.804 
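The host identity exported near the end of the trace above comes from nvme gen-hostnqn; the host ID is the UUID embedded in that NQN. One hedged way to derive it in shell (the helper's exact parsing is not visible in this log):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # drop everything through the last ':' to keep the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")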
23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.804 23:55:47 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.805 23:55:47 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.805 23:55:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.335 23:55:49 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:20.335 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:20.335 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.335 23:55:49 
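Both ports above match Mellanox (vendor 0x15b3) device 0x1017, i.e. ConnectX-5, which is also what switches NVME_CONNECT to 'nvme connect -i 15' (-i being nvme-cli's --nr-io-queues). Outside the harness, a rough cross-check of that device match can be done with lspci; this is an illustrative equivalent, not the script's own mechanism:

  # List ports with vendor 0x15b3 / device 0x1017, as matched in the trace above.
  lspci -Dnn | grep -i '15b3:1017'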
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:20.335 Found net devices under 0000:09:00.0: mlx_0_0 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:20.335 Found net devices under 0000:09:00.1: mlx_0_1 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:20.335 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # 
rxe_cfg rxe-net 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:20.336 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:20.336 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:09:20.336 altname enp9s0f0np0 00:09:20.336 inet 192.168.100.8/24 scope global mlx_0_0 00:09:20.336 valid_lft forever preferred_lft forever 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:20.336 23:55:49 
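rdma_device_init earlier in the trace loads the kernel IB/RDMA stack before the interface IPs are gathered. The traced modprobe sequence, as a standalone snippet:

  # Kernel IB/RDMA modules, in the order loaded by the trace above.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
  done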
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:20.336 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:20.336 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:09:20.336 altname enp9s0f1np1 00:09:20.336 inet 192.168.100.9/24 scope global mlx_0_1 00:09:20.336 valid_lft forever preferred_lft forever 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:20.336 23:55:49 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:20.336 192.168.100.9' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:20.336 192.168.100.9' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:20.336 192.168.100.9' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=466317 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 466317 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 466317 ']' 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
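The two target IPs are parsed from 'ip -o -4 addr show' on each mlx interface and then split with head/tail, exactly as traced above. A minimal runnable sketch of that parsing:

  get_ip_address() {
    local interface=$1
    # -o prints one line per address; field 4 is the CIDR, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9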
00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:20.336 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.336 [2024-05-14 23:55:49.626971] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:09:20.336 [2024-05-14 23:55:49.627065] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.336 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.594 [2024-05-14 23:55:49.698513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.594 [2024-05-14 23:55:49.807617] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.594 [2024-05-14 23:55:49.807675] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.594 [2024-05-14 23:55:49.807703] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.594 [2024-05-14 23:55:49.807715] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.594 [2024-05-14 23:55:49.807724] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.594 [2024-05-14 23:55:49.807808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.594 [2024-05-14 23:55:49.807837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.594 [2024-05-14 23:55:49.807839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.595 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:20.595 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:09:20.595 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.595 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.595 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.853 23:55:49 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.853 23:55:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:20.853 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.853 23:55:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.853 [2024-05-14 23:55:49.978730] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2137160/0x213b650) succeed. 00:09:20.853 [2024-05-14 23:55:49.989371] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2138700/0x217cce0) succeed. 
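(The rpc_cmd invocations traced below are the standard NVMe-oF/RDMA target bring-up. As a minimal standalone sketch, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock, with all flags copied from the rpc_cmd traces in this log:

  # transport, subsystem, RDMA listener, then a 1000 MiB null bdev as namespace 1
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # as in the fused_ordering setup later in this log

In the harness, rpc_cmd forwards these same arguments to rpc.py against the target started by nvmfappstart.)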
00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.853 [2024-05-14 23:55:50.132590] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:20.853 [2024-05-14 23:55:50.132877] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.853 NULL1 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=466461 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:20.853 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.418 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.418 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:21.418 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:21.418 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.418 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.676 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.676 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:21.676 23:55:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:21.676 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.676 23:55:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.935 23:55:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.935 23:55:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:21.935 23:55:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:21.935 23:55:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.935 23:55:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.192 23:55:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.192 23:55:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:22.192 23:55:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.192 23:55:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.192 23:55:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.757 23:55:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.757 23:55:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:22.757 23:55:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.757 23:55:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.757 23:55:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.014 23:55:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.014 23:55:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:23.014 23:55:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:23.014 23:55:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.014 23:55:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.270 23:55:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.270 23:55:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:23.270 23:55:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:23.270 23:55:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.270 
23:55:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.527 23:55:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.527 23:55:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:23.527 23:55:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:23.527 23:55:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.527 23:55:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.784 23:55:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.784 23:55:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:23.784 23:55:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:23.784 23:55:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.784 23:55:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.348 23:55:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.348 23:55:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:24.348 23:55:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.348 23:55:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.348 23:55:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.606 23:55:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.606 23:55:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:24.606 23:55:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.606 23:55:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.606 23:55:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.863 23:55:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.863 23:55:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:24.863 23:55:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.863 23:55:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.863 23:55:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.136 23:55:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.136 23:55:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:25.136 23:55:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.136 23:55:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.136 23:55:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.410 23:55:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.410 23:55:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:25.410 23:55:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.410 23:55:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.410 23:55:54 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.974 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.974 23:55:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:25.974 23:55:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.974 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.974 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.231 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.231 23:55:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:26.231 23:55:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.231 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.231 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.488 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.488 23:55:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:26.488 23:55:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.488 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.488 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.745 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.745 23:55:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:26.745 23:55:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.745 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.745 23:55:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.002 23:55:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.002 23:55:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:27.002 23:55:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.002 23:55:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.002 23:55:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.566 23:55:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.566 23:55:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:27.566 23:55:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.566 23:55:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.566 23:55:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.824 23:55:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.824 23:55:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:27.824 23:55:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.824 23:55:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.824 23:55:56 nvmf_rdma.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.081 23:55:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.081 23:55:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:28.081 23:55:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.081 23:55:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.081 23:55:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.338 23:55:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.338 23:55:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:28.338 23:55:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.338 23:55:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.338 23:55:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.596 23:55:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.596 23:55:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:28.596 23:55:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.596 23:55:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.596 23:55:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.161 23:55:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.162 23:55:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:29.162 23:55:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.162 23:55:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.162 23:55:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.419 23:55:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.419 23:55:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:29.419 23:55:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.419 23:55:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.419 23:55:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.676 23:55:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.676 23:55:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:29.676 23:55:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.676 23:55:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.676 23:55:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.933 23:55:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.933 23:55:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:29.933 23:55:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.933 23:55:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.933 23:55:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # 
set +x 00:09:30.499 23:55:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.499 23:55:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:30.499 23:55:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.499 23:55:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.499 23:55:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.757 23:55:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.757 23:55:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:30.757 23:55:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.757 23:55:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.757 23:55:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.014 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.014 23:56:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:31.014 23:56:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.014 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.014 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.014 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 466461 00:09:31.272 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (466461) - No such process 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 466461 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:31.272 rmmod nvme_rdma 00:09:31.272 rmmod nvme_fabrics 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 466317 ']' 00:09:31.272 
23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 466317 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 466317 ']' 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 466317 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 466317 00:09:31.272 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:31.273 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:31.273 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 466317' 00:09:31.273 killing process with pid 466317 00:09:31.273 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 466317 00:09:31.273 [2024-05-14 23:56:00.573840] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:31.273 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 466317 00:09:31.530 [2024-05-14 23:56:00.643922] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:31.797 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.797 23:56:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:31.797 00:09:31.797 real 0m13.856s 00:09:31.797 user 0m39.635s 00:09:31.797 sys 0m4.084s 00:09:31.797 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:31.797 23:56:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.797 ************************************ 00:09:31.797 END TEST nvmf_connect_stress 00:09:31.797 ************************************ 00:09:31.797 23:56:00 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:09:31.797 23:56:00 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:31.797 23:56:00 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:31.797 23:56:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:31.797 ************************************ 00:09:31.797 START TEST nvmf_fused_ordering 00:09:31.797 ************************************ 00:09:31.797 23:56:00 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:09:31.797 * Looking for test storage... 
00:09:31.797 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.797 23:56:01 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:31.798 23:56:01 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:34.331 23:56:03 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:34.331 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:34.331 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:34.331 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:34.332 Found net devices under 0000:09:00.0: mlx_0_0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:34.332 Found net devices under 0000:09:00.1: mlx_0_1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:34.332 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.332 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:09:34.332 altname enp9s0f0np0 00:09:34.332 inet 192.168.100.8/24 scope global mlx_0_0 00:09:34.332 valid_lft forever preferred_lft forever 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:34.332 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.332 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:09:34.332 altname enp9s0f1np1 00:09:34.332 inet 192.168.100.9/24 scope global mlx_0_1 00:09:34.332 valid_lft forever preferred_lft forever 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:34.332 23:56:03 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:34.332 192.168.100.9' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:34.332 192.168.100.9' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:34.332 192.168.100.9' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@458 -- # head -n 1 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=469756 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 469756 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 469756 ']' 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:34.332 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.332 [2024-05-14 23:56:03.548088] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:09:34.332 [2024-05-14 23:56:03.548164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.332 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.332 [2024-05-14 23:56:03.620315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.590 [2024-05-14 23:56:03.743669] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.590 [2024-05-14 23:56:03.743736] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.590 [2024-05-14 23:56:03.743752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.590 [2024-05-14 23:56:03.743766] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.590 [2024-05-14 23:56:03.743777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
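(nvmfappstart here launches the target single-core (-m 0x2) and waitforlisten blocks until the RPC socket answers. A rough standalone equivalent, assuming the repo layout used by this job; the real waitforlisten in autotest_common.sh additionally checks pid liveness and caps its retries:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app accepts commands
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

Only once this loop exits is it safe to issue the nvmf_create_transport and subsystem RPCs traced below.)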
00:09:34.590 [2024-05-14 23:56:03.743815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.590 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:34.590 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:09:34.590 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.590 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.590 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.590 23:56:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.590 23:56:03 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:34.590 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.590 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.590 [2024-05-14 23:56:03.924438] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11b8bd0/0x11bd0c0) succeed. 00:09:34.590 [2024-05-14 23:56:03.936251] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11ba0d0/0x11fe750) succeed. 00:09:34.848 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.848 23:56:03 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:34.848 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.848 23:56:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.848 [2024-05-14 23:56:04.005007] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:34.848 [2024-05-14 23:56:04.005315] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.848 NULL1 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.848 23:56:04 
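(The xtrace between fused_ordering.sh@13 and @20 is the entire target bring-up. Replayed outside the harness it is roughly the sequence below, a sketch assuming rpc_cmd forwards to scripts/rpc.py over the /var/tmp/spdk.sock socket that waitforlisten polled above; every flag and value is the one visible in this run.)

    # Sketch: target bring-up RPCs from fused_ordering.sh@15-@20, replayed by hand.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512-byte blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1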
nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:34.848 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.849 23:56:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' [2024-05-14 23:56:04.049750] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... [2024-05-14 23:56:04.049793] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469780 ] EAL: No free 2048 kB hugepages reported on node 1 00:09:35.107 Attached to nqn.2016-06.io.spdk:cnode1 00:09:35.107 Namespace ID: 1 size: 1GB
[fused_ordering(0) through fused_ordering(1023): 1,024 numbered progress lines emitted between 00:09:35.107 and 00:09:35.625, collapsed here; nothing in the enumeration varies except the index and the timestamp]
23:56:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
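(When auditing an uncollapsed raw console log, a quick sanity check that the test drove all 1,024 entries is to count the numbered tokens. Hypothetical post-processing, not part of the harness; "console.log" stands in for wherever the raw output was saved.)

    # Expect 1024: one fused_ordering(N) token per iteration, N = 0..1023.
    grep -oE 'fused_ordering\([0-9]+\)' console.log | sort -u | wc -l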
00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:35.625 rmmod nvme_rdma 00:09:35.625 rmmod nvme_fabrics 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 469756 ']' 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 469756 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 469756 ']' 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 469756 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 469756 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 469756' 00:09:35.625 killing process with pid 469756 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 469756 00:09:35.625 [2024-05-14 23:56:04.962562] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:35.625 23:56:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 469756 00:09:35.882 [2024-05-14 23:56:05.012419] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:36.203 23:56:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.203 23:56:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:36.203 00:09:36.203 real 0m4.333s 00:09:36.203 user 0m3.548s 00:09:36.203 sys 0m2.060s 00:09:36.203 23:56:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.203 23:56:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:36.203 ************************************ 00:09:36.203 END TEST nvmf_fused_ordering 00:09:36.203 ************************************ 00:09:36.203 23:56:05 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:36.203 23:56:05 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:36.203 23:56:05 
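(The killprocess call traced above, autotest_common.sh@946-@970, condenses to the sketch below: confirm the pid is set and alive, refuse to signal it if it no longer looks like the process the harness started, then kill and reap. In this run ps reported the comm as reactor_1, the SPDK reactor thread name. A condensed sketch, not the verbatim helper.)

    # Condensed sketch of the killprocess idiom; $1 is the nvmf_tgt pid (469756 here).
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                                   # still alive?
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1      # never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap, matching the `wait 469756` step
    }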
nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.203 23:56:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:36.203 ************************************ 00:09:36.203 START TEST nvmf_delete_subsystem 00:09:36.203 ************************************ 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:36.203 * Looking for test storage... 00:09:36.203 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:36.203 
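(Worth noting from the common.sh sourcing above: NVME_HOSTID is not generated separately, it is simply the UUID suffix of the NQN that nvme gen-hostnqn returned. A minimal sketch of that derivation, assuming nvme-cli is installed:)

    # Sketch of nvmf/common.sh@17-@19: one gen-hostnqn call feeds both identifiers.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # the UUID has no colons, so this strips the nqn prefix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")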
23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.203 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.204 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.204 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.204 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.204 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.204 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.204 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:36.204 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:36.204 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.204 23:56:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.731 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:38.732 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:38.732 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:38.732 Found net devices under 0000:09:00.0: mlx_0_0 
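(The "Found net devices under ..." lines come from globbing sysfs: each PCI function exposes its netdevs as directories under /sys/bus/pci/devices/<bdf>/net/. Stripped of the harness plumbing, the loop is roughly the sketch below; the two BDFs are the mlx5 functions found in this run, not a fixed list.)

    # Sketch of the pci_net_devs glob, nvmf/common.sh@382-@401.
    net_devs=()
    for pci in 0000:09:00.0 0000:09:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one dir per owned netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done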
00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:38.732 Found net devices under 0000:09:00.1: mlx_0_1 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.732 23:56:07 
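(rdma_device_init above is mostly module loading; as a standalone step it amounts to the sketch below, with the module list copied from load_ib_rdma_modules as traced at nvmf/common.sh@62-@68. Run as root; the order matters only loosely since modprobe resolves dependencies itself.)

    # Sketch: the kernel modules the harness loads before assigning RDMA IPs.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done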
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:38.732 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:38.732 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:09:38.732 altname enp9s0f0np0 00:09:38.732 inet 192.168.100.8/24 scope global mlx_0_0 00:09:38.732 valid_lft forever preferred_lft forever 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:38.732 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:38.733 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:38.733 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:09:38.733 altname enp9s0f1np1 00:09:38.733 inet 192.168.100.9/24 scope global mlx_0_1 00:09:38.733 valid_lft forever 
preferred_lft forever 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:38.733 
23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:38.733 192.168.100.9' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:38.733 192.168.100.9' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:38.733 192.168.100.9' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=471938 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 471938 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 471938 ']' 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:38.733 23:56:07 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.733 [2024-05-14 23:56:07.837340] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
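The get_ip_address calls traced above reduce to a three-stage pipeline over "ip -o" output, and the two target addresses are then peeled off the newline-separated RDMA_IP_LIST with head/tail, exactly as shown. A condensed sketch of both steps, using the interface names from this run:

# Sketch: IPv4 address of an RDMA netdev, then first/second target IPs.
get_ip_address() {
    local interface=$1
    # "ip -o" emits one record per line; field 4 is ADDRESS/PREFIX.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9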
00:09:38.733 [2024-05-14 23:56:07.837440] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.733 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.733 [2024-05-14 23:56:07.911665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:38.733 [2024-05-14 23:56:08.027585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.733 [2024-05-14 23:56:08.027644] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.733 [2024-05-14 23:56:08.027661] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.733 [2024-05-14 23:56:08.027675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.733 [2024-05-14 23:56:08.027687] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.733 [2024-05-14 23:56:08.027775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.733 [2024-05-14 23:56:08.027781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.686 [2024-05-14 23:56:08.836890] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19ee3d0/0x19f28c0) succeed. 00:09:39.686 [2024-05-14 23:56:08.848636] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19ef8d0/0x1a33f50) succeed. 
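The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message and the (( i == 0 )) / return 0 pair above bracket autotest_common.sh's waitforlisten, which in essence polls the target's RPC socket until it answers. A simplified sketch of that idea; the real helper carries more retry and cleanup logic than shown here:

# Sketch: block until an SPDK app responds on its RPC socket, or fail.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # app died before listening
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                              # RPC server is up
        fi
        sleep 0.1
    done
    return 1
}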
00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.686 [2024-05-14 23:56:08.950179] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:39.686 [2024-05-14 23:56:08.950508] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.686 NULL1 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.686 Delay0 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=472137 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:39.686 23:56:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:39.686 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.948 [2024-05-14 23:56:09.049171] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:09:41.844 23:56:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:41.844 23:56:10 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:41.844 23:56:10 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:42.775 NVMe io qpair process completion error
00:09:42.775 NVMe io qpair process completion error
00:09:43.032 NVMe io qpair process completion error
00:09:43.032 NVMe io qpair process completion error
00:09:43.032 NVMe io qpair process completion error
00:09:43.032 23:56:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:43.032 23:56:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:43.032 23:56:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 472137
00:09:43.032 NVMe io qpair process completion error
00:09:43.032 23:56:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:43.289 23:56:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:43.289 23:56:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 472137
00:09:43.289 23:56:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:43.855-00:09:43.857 Read/Write completed with error (sct=0, sc=8), most followed by "starting I/O failed: -6" -- this pair repeats for every I/O still outstanding on the two failed qpairs, a few hundred near-identical completions in all
00:09:43.857 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:43.857 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 472137
00:09:43.857 Initializing NVMe Controllers
00:09:43.857 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:43.857 Controller IO queue size 128, less than required.
00:09:43.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:43.857 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:43.857 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:43.857 Initialization complete. Launching workers.
00:09:43.857 ========================================================
00:09:43.857                                                                     Latency(us)
00:09:43.857 Device Information                                                             :    IOPS   MiB/s     Average         min         max
00:09:43.857 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   72.50    0.04  1767015.11  1000074.94  2973742.55
00:09:43.857 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   88.61    0.04  1453189.26  1000568.91  2973533.81
00:09:43.857 ========================================================
00:09:43.857 Total                                                                          :  161.11    0.08  1594410.90  1000074.94  2973742.55
00:09:43.857
00:09:43.857 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:43.857 [2024-05-14 23:56:13.137533] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:09:43.857 [2024-05-14 23:56:13.154092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:09:43.857 [2024-05-14 23:56:13.154120] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
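The failed-state errors above are the expected outcome: delete_subsystem.sh tears the subsystem down underneath a running spdk_nvme_perf and then polls until perf gives up. Reduced to its core, the pattern traced at delete_subsystem.sh lines 32-38 looks roughly like this sketch; perf_pid held 472137 in this run:

# Sketch: delete the subsystem mid-I/O, then wait for perf to notice and exit.
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do
    (( delay++ > 30 )) && exit 1   # fail the test if perf never exits
    sleep 0.5
done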
00:09:43.857 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 472137 00:09:44.423 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (472137) - No such process 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 472137 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 472137 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 472137 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:44.423 [2024-05-14 23:56:13.658498] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=472676 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:44.423 23:56:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.423 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.423 [2024-05-14 23:56:13.742592] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:44.988 23:56:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:44.988 23:56:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:44.988 23:56:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:45.552 23:56:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:45.552 23:56:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:45.552 23:56:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:46.116 23:56:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:46.116 23:56:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:46.116 23:56:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:46.372 23:56:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:46.372 23:56:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:46.372 23:56:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:46.936 23:56:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:46.936 23:56:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:46.936 23:56:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:47.500 23:56:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:47.500 23:56:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:47.500 23:56:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:48.065 23:56:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:48.065 23:56:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:48.065 23:56:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:48.629 23:56:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:48.629 23:56:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:48.629 23:56:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:48.886 23:56:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:48.886 
23:56:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:48.886 23:56:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:49.452 23:56:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:49.452 23:56:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:49.452 23:56:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:50.017 23:56:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:50.017 23:56:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:50.017 23:56:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:50.581 23:56:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:50.581 23:56:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:50.581 23:56:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:51.175 23:56:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:51.175 23:56:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:51.175 23:56:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:51.433 23:56:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:51.433 23:56:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676 00:09:51.433 23:56:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:51.691 Initializing NVMe Controllers 00:09:51.691 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:51.691 Controller IO queue size 128, less than required. 00:09:51.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:51.691 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:51.691 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:51.691 Initialization complete. Launching workers. 
00:09:51.691 ========================================================
00:09:51.691                                                                     Latency(us)
00:09:51.691 Device Information                                                             :    IOPS   MiB/s     Average         min         max
00:09:51.691 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06  1003280.94  1000576.66  1006718.48
00:09:51.691 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06  1001625.29  1000082.40  1004884.16
00:09:51.691 ========================================================
00:09:51.691 Total                                                                          :  256.00    0.12  1002453.11  1000082.40  1006718.48
00:09:51.691
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 472676
00:09:51.947 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (472676) - No such process
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 472676
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:09:51.947 rmmod nvme_rdma
00:09:51.947 rmmod nvme_fabrics
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 471938 ']'
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 471938
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 471938 ']'
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 471938
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 471938
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 471938'
killing process with pid 471938
00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 471938
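The killprocess call above refuses to fire blindly: it checks that the pid is non-empty and still alive, inspects the comm name (reactor_0 here) so it never signals a sudo wrapper, and only then kills and reaps the target. A reduced sketch of that guard; the real helper in autotest_common.sh also covers FreeBSD and sudo-wrapped processes:

# Sketch: kill an SPDK app only after sanity-checking the pid.
killprocess_sketch() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2> /dev/null || return 0    # already gone
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1     # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null                   # reap it if it is our child
}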
00:09:51.947 [2024-05-14 23:56:21.290898] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:51.947 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 471938 00:09:52.205 [2024-05-14 23:56:21.352428] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:52.463 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.463 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:52.463 00:09:52.463 real 0m16.287s 00:09:52.463 user 0m49.357s 00:09:52.463 sys 0m2.791s 00:09:52.463 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:52.463 23:56:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 ************************************ 00:09:52.463 END TEST nvmf_delete_subsystem 00:09:52.463 ************************************ 00:09:52.463 23:56:21 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:09:52.463 23:56:21 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:52.463 23:56:21 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:52.463 23:56:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:52.463 ************************************ 00:09:52.463 START TEST nvmf_ns_masking 00:09:52.463 ************************************ 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:09:52.463 * Looking for test storage... 
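The START TEST banner above is printed by the run_test wrapper from autotest_common.sh, which labels and times each test body; the real/user/sys block that closed nvmf_delete_subsystem is that same wrapper's timing output. Schematically, and simplified from the real implementation (which also manages xtrace state and argument checks):

# Sketch: a run_test-style wrapper that brackets and times a test script.
run_test_sketch() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                       # emits the real/user/sys summary
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
run_test_sketch nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma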
00:09:52.463 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.463 23:56:21 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=e74fc6fb-f69a-42d7-8ad6-194dae43976f 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 
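ns_masking.sh derives its host identity from two generators visible above: nvme gen-hostnqn (from nvme-cli) produced the harness-wide NVME_HOSTNQN, whose trailing uuid becomes NVME_HOSTID, while uuidgen mints the per-test HOSTID used in the masking checks. A sketch of that derivation; the exact extraction used by nvmf/common.sh is assumed here, not quoted from it:

# Sketch: host identifiers as seen in the trace above.
NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid (assumed extraction)
HOSTID=$(uuidgen)                  # fresh uuid for this test run
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID test_host_id=$HOSTID"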
00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.464 23:56:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
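What gather_supported_nvmf_pci_devs is doing above: it builds whitelists of PCI vendor:device pairs (Intel 0x8086 parts into the e810 and x722 arrays, Mellanox 0x15b3 parts into mlx) and keeps only NICs whose IDs showed up in the earlier bus scan. A shorthand of that logic, assuming pci_bus_cache is the associative array that scan filled (it is referenced, not defined, in this stretch of the trace):

    intel=0x8086 mellanox=0x15b3
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # the ID both ports below report: (0x15b3 - 0x1017)
    pci_devs=("${mlx[@]}")                        # SPDK_TEST_NVMF_NICS=mlx5 narrows the list to Mellanox parts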
00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:55.022 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:55.022 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:55.022 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:55.023 Found net devices under 0000:09:00.0: mlx_0_0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.023 23:56:24 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:55.023 Found net devices under 0000:09:00.1: mlx_0_1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:55.023 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:55.023 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:09:55.023 altname enp9s0f0np0 00:09:55.023 inet 192.168.100.8/24 scope global mlx_0_0 00:09:55.023 valid_lft forever preferred_lft forever 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:55.023 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:55.023 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:09:55.023 altname enp9s0f1np1 00:09:55.023 inet 192.168.100.9/24 scope global mlx_0_1 00:09:55.023 valid_lft forever preferred_lft forever 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:55.023 23:56:24 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:55.023 192.168.100.9' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:55.023 192.168.100.9' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:55.023 192.168.100.9' 00:09:55.023 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:09:55.024 
23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=475578 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 475578 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 475578 ']' 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:55.024 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:55.024 [2024-05-14 23:56:24.313962] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:09:55.024 [2024-05-14 23:56:24.314041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.024 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.282 [2024-05-14 23:56:24.384306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.282 [2024-05-14 23:56:24.497064] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.282 [2024-05-14 23:56:24.497114] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.282 [2024-05-14 23:56:24.497143] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.282 [2024-05-14 23:56:24.497156] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.282 [2024-05-14 23:56:24.497166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
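The trace above derived the two target addresses from the Mellanox ports, loaded nvme-rdma, and launched the target; the EAL and app_setup_trace notices mark DPDK coming up, and the reactor notices just below mark the app going live. Condensed into a replayable sketch ($SPDK_ROOT again shortens the traced workspace path):

    # target IPs, extracted exactly as the nvmf/common.sh@113 pipeline does above
    NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)  # 192.168.100.9
    modprobe nvme-rdma

    # app start (pid 475578 in this run); waitforlisten then polls /var/tmp/spdk.sock
    $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &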
00:09:55.282 [2024-05-14 23:56:24.497270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.282 [2024-05-14 23:56:24.497330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.282 [2024-05-14 23:56:24.497358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.282 [2024-05-14 23:56:24.497361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.282 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:55.282 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:09:55.282 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:55.282 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.282 23:56:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:55.539 23:56:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.539 23:56:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:55.796 [2024-05-14 23:56:24.901788] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf75a20/0xf79f10) succeed. 00:09:55.796 [2024-05-14 23:56:24.912385] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf77060/0xfbb5a0) succeed. 00:09:55.796 23:56:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:55.796 23:56:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:55.796 23:56:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:56.052 Malloc1 00:09:56.052 23:56:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:56.310 Malloc2 00:09:56.310 23:56:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.566 23:56:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:56.823 23:56:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:57.079 [2024-05-14 23:56:26.313141] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:57.079 [2024-05-14 23:56:26.313477] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:57.079 23:56:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:09:57.079 23:56:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e74fc6fb-f69a-42d7-8ad6-194dae43976f -a 192.168.100.8 -s 4420 -i 4 00:09:58.011 23:56:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 
-- # waitforserial SPDKISFASTANDAWESOME 00:09:58.011 23:56:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:58.011 23:56:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.011 23:56:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:58.011 23:56:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:00.533 [ 0]:0x1 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f813d5165c884ae89ce3e725ffb24b15 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f813d5165c884ae89ce3e725ffb24b15 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:00.533 [ 0]:0x1 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f813d5165c884ae89ce3e725ffb24b15 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f813d5165c884ae89ce3e725ffb24b15 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:10:00.533 
23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:00.533 [ 1]:0x2 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=26504c3e615a47b1ad90701e100ffa95 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 26504c3e615a47b1ad90701e100ffa95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:10:00.533 23:56:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.097 23:56:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.354 23:56:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:01.918 23:56:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:10:01.918 23:56:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e74fc6fb-f69a-42d7-8ad6-194dae43976f -a 192.168.100.8 -s 4420 -i 4 00:10:02.848 23:56:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:02.848 23:56:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:10:02.848 23:56:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.848 23:56:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:10:02.848 23:56:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:10:02.848 23:56:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:04.761 23:56:34 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:04.761 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:05.018 [ 0]:0x2 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=26504c3e615a47b1ad90701e100ffa95 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 26504c3e615a47b1ad90701e100ffa95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:05.018 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:05.275 [ 0]:0x1 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:05.275 23:56:34 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f813d5165c884ae89ce3e725ffb24b15 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f813d5165c884ae89ce3e725ffb24b15 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:05.275 [ 1]:0x2 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=26504c3e615a47b1ad90701e100ffa95 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 26504c3e615a47b1ad90701e100ffa95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:05.275 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- 
target/ns_masking.sh@39 -- # grep 0x2 00:10:05.532 [ 0]:0x2 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=26504c3e615a47b1ad90701e100ffa95 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 26504c3e615a47b1ad90701e100ffa95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:10:05.532 23:56:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.462 23:56:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:06.462 23:56:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:10:06.462 23:56:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e74fc6fb-f69a-42d7-8ad6-194dae43976f -a 192.168.100.8 -s 4420 -i 4 00:10:07.832 23:56:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:07.832 23:56:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:10:07.832 23:56:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.832 23:56:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:10:07.832 23:56:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:10:07.832 23:56:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:09.727 [ 0]:0x1 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- 
# nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f813d5165c884ae89ce3e725ffb24b15 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f813d5165c884ae89ce3e725ffb24b15 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:09.727 [ 1]:0x2 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=26504c3e615a47b1ad90701e100ffa95 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 26504c3e615a47b1ad90701e100ffa95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:09.727 23:56:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:10.011 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 
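The ns_is_visible checks that dominate this stretch of the trace (including the one continuing just below) all follow the same three steps traced at target/ns_masking.sh@39-41: list the namespaces the controller exposes, read the NGUID back, and compare it against the all-zero placeholder. Reconstructed as a sketch from the trace; the upstream helper may differ in detail:

    ns_is_visible() {
        # $1 is an nsid such as 0x1 or 0x2; prints "[ N]:<nsid>" when it is exposed
        nvme list-ns /dev/nvme0 | grep "$1"
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a namespace masked away from this host reports an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

Here /dev/nvme0 is the controller name resolved earlier from 'nvme list-subsys -o json' (ctrl_id=nvme0 in this run), and the NOT wrapper seen around several calls asserts the inverse, i.e. that the namespace stays hidden.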
00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:10.012 [ 0]:0x2 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=26504c3e615a47b1ad90701e100ffa95 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 26504c3e615a47b1ad90701e100ffa95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:10.012 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:10.270 [2024-05-14 23:56:39.573153] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:10.270 request: 00:10:10.270 { 00:10:10.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.270 "nsid": 2, 00:10:10.270 "host": "nqn.2016-06.io.spdk:host1", 00:10:10.270 "method": "nvmf_ns_remove_host", 00:10:10.270 "req_id": 1 00:10:10.270 } 00:10:10.270 Got JSON-RPC error response 00:10:10.270 response: 00:10:10.270 { 00:10:10.270 "code": -32602, 00:10:10.270 "message": "Invalid parameters" 00:10:10.270 } 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:10.270 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:10.527 [ 0]:0x2 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=26504c3e615a47b1ad90701e100ffa95 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 26504c3e615a47b1ad90701e100ffa95 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:10:10.527 23:56:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.091 23:56:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:10:11.656 23:56:40 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:11.656 rmmod nvme_rdma 00:10:11.656 rmmod nvme_fabrics 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 475578 ']' 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 475578 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 475578 ']' 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 475578 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 475578 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 475578' 00:10:11.656 killing process with pid 475578 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 475578 00:10:11.656 [2024-05-14 23:56:40.784047] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:11.656 23:56:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 475578 00:10:11.656 [2024-05-14 23:56:40.866980] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:11.914 23:56:41 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.914 23:56:41 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:11.914 00:10:11.914 real 0m19.496s 00:10:11.914 user 1m10.826s 00:10:11.914 sys 0m3.081s 00:10:11.914 23:56:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:11.914 23:56:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:11.914 ************************************ 00:10:11.914 END TEST nvmf_ns_masking 00:10:11.914 ************************************ 00:10:11.914 23:56:41 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:11.914 23:56:41 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:10:11.914 23:56:41 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
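That closes the ns_masking suite (exit 0, about 19.5 s wall time per the summary above). The sequence it exercised, condensed from the trace into one replayable sketch; rpc.py stands for $SPDK_ROOT/scripts/rpc.py and $HOSTID for the UUID generated at the top of the run:

    # one-time fabric setup
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I $HOSTID -a 192.168.100.8 -s 4420 -i 4

    # masking proper: re-attach ns 1 without auto-visibility, then toggle per-host access
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 appears
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 hidden again

    # negative case traced above: per-host edits are rejected for auto-visible namespaces
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1   # -32602 Invalid parameters

The ns_is_visible checks interleaved with these calls confirm each transition from the host's side.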
00:10:11.914 23:56:41 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:11.914 23:56:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:11.914 ************************************ 00:10:11.914 START TEST nvmf_nvme_cli 00:10:11.914 ************************************ 00:10:11.914 23:56:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:10:12.173 * Looking for test storage... 00:10:12.173 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.173 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:12.174 23:56:41 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:12.174 23:56:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.705 23:56:43 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:10:14.705 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:10:14.705 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:10:14.705 Found net devices under 0000:09:00.0: mlx_0_0 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
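Each mlx5 PCI function matched above (vendor 0x15b3, device 0x1017) is mapped to its kernel net device through sysfs before any RDMA configuration happens. A sketch of that mapping loop, using the same sysfs globs the trace shows:

for pci in "${pci_devs[@]}"; do
    # one entry per interface, e.g. /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done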
00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:10:14.705 Found net devices under 0000:09:00.1: mlx_0_1 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:14.705 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.706 23:56:43 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:14.706 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:14.706 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:10:14.706 altname enp9s0f0np0 00:10:14.706 inet 192.168.100.8/24 scope global mlx_0_0 00:10:14.706 valid_lft forever preferred_lft forever 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:14.706 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:14.706 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:10:14.706 altname enp9s0f1np1 00:10:14.706 inet 192.168.100.9/24 scope global mlx_0_1 00:10:14.706 valid_lft forever preferred_lft forever 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:14.706 192.168.100.9' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:14.706 192.168.100.9' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:14.706 192.168.100.9' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 
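The two-line RDMA_IP_LIST assembled here is what the rest of the suite keys off: the first address becomes the target listener, the second the alternate. A sketch of the extraction as traced; get_ip_address matches the nvmf/common.sh helper above, and the loop is shown over net_devs for self-containment (the real code iterates the RDMA-capable subset from get_rdma_if_list):

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic_name in "${net_devs[@]}"; do get_ip_address "$nic_name"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9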
00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=479778 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 479778 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 479778 ']' 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:14.706 23:56:43 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:14.706 [2024-05-14 23:56:44.002070] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:10:14.706 [2024-05-14 23:56:44.002150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.706 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.964 [2024-05-14 23:56:44.073074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.964 [2024-05-14 23:56:44.185104] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.964 [2024-05-14 23:56:44.185161] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.964 [2024-05-14 23:56:44.185189] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.964 [2024-05-14 23:56:44.185200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.964 [2024-05-14 23:56:44.185210] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
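nvmfappstart above launches nvmf_tgt (here pid 479778) and then blocks in waitforlisten until the RPC socket is up. The real helper also probes the RPC endpoint itself; this is only a sketch of the gating loop, assuming a socket-existence check is sufficient, with the retry budget the trace shows (max_retries=100):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$rpc_addr" ] && return 0           # RPC socket is accepting
        sleep 0.5
    done
    return 1                                     # never came up within the retry budget
}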
00:10:14.964 [2024-05-14 23:56:44.185284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.964 [2024-05-14 23:56:44.185344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.964 [2024-05-14 23:56:44.185371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.964 [2024-05-14 23:56:44.185373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.964 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:14.964 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:10:14.964 23:56:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.964 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.964 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 [2024-05-14 23:56:44.357098] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x63da20/0x641f10) succeed. 00:10:15.221 [2024-05-14 23:56:44.368007] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x63f060/0x6835a0) succeed. 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 Malloc0 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 Malloc1 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.221 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.478 23:56:44 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:15.478 [2024-05-14 23:56:44.597521] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:15.478 [2024-05-14 23:56:44.597875] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 4420 00:10:15.478 00:10:15.478 Discovery Log Number of Records 2, Generation counter 2 00:10:15.478 =====Discovery Log Entry 0====== 00:10:15.478 trtype: rdma 00:10:15.478 adrfam: ipv4 00:10:15.478 subtype: current discovery subsystem 00:10:15.478 treq: not required 00:10:15.478 portid: 0 00:10:15.478 trsvcid: 4420 00:10:15.478 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:15.478 traddr: 192.168.100.8 00:10:15.478 eflags: explicit discovery connections, duplicate discovery information 00:10:15.478 rdma_prtype: not specified 00:10:15.478 rdma_qptype: connected 00:10:15.478 rdma_cms: rdma-cm 00:10:15.478 rdma_pkey: 0x0000 00:10:15.478 =====Discovery Log Entry 1====== 00:10:15.478 trtype: rdma 00:10:15.478 adrfam: ipv4 00:10:15.478 subtype: nvme subsystem 00:10:15.478 treq: not required 00:10:15.478 portid: 0 00:10:15.478 trsvcid: 4420 00:10:15.478 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:15.478 traddr: 192.168.100.8 00:10:15.478 eflags: none 00:10:15.478 rdma_prtype: not specified 00:10:15.478 rdma_qptype: connected 00:10:15.478 rdma_cms: rdma-cm 00:10:15.478 rdma_pkey: 0x0000 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* 
]] 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:15.478 23:56:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:19.653 23:56:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:19.653 23:56:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:10:19.653 23:56:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.653 23:56:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:10:19.653 23:56:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:10:19.653 23:56:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:21.551 /dev/nvme0n1 ]] 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- 
# get_nvme_devs 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:21.551 23:56:50 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.458 23:56:52 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:23.458 rmmod nvme_rdma 00:10:23.458 rmmod nvme_fabrics 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 479778 ']' 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 479778 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 479778 ']' 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 479778 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 479778 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 479778' 00:10:23.458 killing process with pid 479778 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 479778 00:10:23.458 [2024-05-14 23:56:52.762826] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:23.458 23:56:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 479778 00:10:23.716 [2024-05-14 23:56:52.852946] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:23.974 23:56:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.974 23:56:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:23.974 00:10:23.974 real 0m11.943s 00:10:23.974 user 0m36.429s 00:10:23.974 sys 0m2.373s 00:10:23.974 23:56:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.974 23:56:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:23.974 ************************************ 00:10:23.974 END TEST nvmf_nvme_cli 00:10:23.974 ************************************ 00:10:23.974 23:56:53 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:10:23.974 23:56:53 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:23.974 23:56:53 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:23.974 23:56:53 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.974 23:56:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:23.974 ************************************ 00:10:23.974 START TEST nvmf_host_management 00:10:23.974 ************************************ 00:10:23.974 23:56:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:23.974 * Looking for test storage... 
00:10:23.974 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:23.974 23:56:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.974 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:10:23.975 23:56:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:10:26.502 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:26.502 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:10:26.503 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:10:26.503 Found net devices under 0000:09:00.0: mlx_0_0 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ 
rdma == tcp ]] 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:10:26.503 Found net devices under 0000:09:00.1: mlx_0_1 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:26.503 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:26.761 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:26.761 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:26.761 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:10:26.761 altname enp9s0f0np0 00:10:26.761 inet 192.168.100.8/24 scope global mlx_0_0 00:10:26.761 valid_lft forever preferred_lft forever 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:26.762 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:26.762 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:10:26.762 altname enp9s0f1np1 00:10:26.762 inet 192.168.100.9/24 scope global mlx_0_1 00:10:26.762 valid_lft forever preferred_lft forever 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # 
get_available_rdma_ips 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:26.762 192.168.100.9' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:26.762 192.168.100.9' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 
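Condensed, the interface/IP discovery traced above boils down to the following sketch. It is illustrative only: the real get_rdma_if_list in nvmf/common.sh matches net_devs against rxe_cfg output rather than hard-coding the mlx_0_* names, but the parsing pipeline is exactly what the trace shows.

  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one record per address; field 4 is e.g. "192.168.100.8/24"
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  RDMA_IP_LIST=$(
      for nic in mlx_0_0 mlx_0_1; do    # illustrative; really produced by get_rdma_if_list
          get_ip_address "$nic"
      done
  )

  # The trace lines that follow split this newline-separated list in two:
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)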
00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:26.762 192.168.100.9' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=482918 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 482918 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 482918 ']' 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:26.762 23:56:55 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.762 [2024-05-14 23:56:55.976947] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:10:26.762 [2024-05-14 23:56:55.977035] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.762 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.762 [2024-05-14 23:56:56.049797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.020 [2024-05-14 23:56:56.170146] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.020 [2024-05-14 23:56:56.170205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.020 [2024-05-14 23:56:56.170221] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.020 [2024-05-14 23:56:56.170240] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.020 [2024-05-14 23:56:56.170252] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.020 [2024-05-14 23:56:56.170316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.020 [2024-05-14 23:56:56.170360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.020 [2024-05-14 23:56:56.170442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:27.020 [2024-05-14 23:56:56.170445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.020 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:27.020 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:10:27.020 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.020 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.020 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:27.020 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.020 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:27.020 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.020 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:27.020 [2024-05-14 23:56:56.351782] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1220d10/0x1225200) succeed. 00:10:27.020 [2024-05-14 23:56:56.362506] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1222350/0x1266890) succeed. 
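The two blocks above show the target side coming up. Stripped of xtrace noise, the sequence is the following (paths and options copied from the trace; waitforlisten is the autotest_common.sh helper that polls the RPC socket):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Start the NVMe-oF target on cores 1-4 (mask 0x1E) with all trace groups enabled
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # block until /var/tmp/spdk.sock answers
  # Same transport options the trace negotiated for rdma
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192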
00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:27.278 Malloc0 00:10:27.278 [2024-05-14 23:56:56.565556] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:27.278 [2024-05-14 23:56:56.565892] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=483077 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 483077 /var/tmp/bdevperf.sock 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 483077 ']' 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:27.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
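The rpcs.txt batch that host_management.sh pipes through rpc_cmd is not echoed in the trace, so the exact calls are an assumption; the Malloc0 output and the listener notice on 192.168.100.8:4420 imply a batch along these lines:

  $rpc bdev_malloc_create 64 512 -b Malloc0     # sizes assumed; only the bdev name is visible above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0    # serial number assumed
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0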
00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:27.278 { 00:10:27.278 "params": { 00:10:27.278 "name": "Nvme$subsystem", 00:10:27.278 "trtype": "$TEST_TRANSPORT", 00:10:27.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.278 "adrfam": "ipv4", 00:10:27.278 "trsvcid": "$NVMF_PORT", 00:10:27.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.278 "hdgst": ${hdgst:-false}, 00:10:27.278 "ddgst": ${ddgst:-false} 00:10:27.278 }, 00:10:27.278 "method": "bdev_nvme_attach_controller" 00:10:27.278 } 00:10:27.278 EOF 00:10:27.278 )") 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:27.278 23:56:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:27.278 "params": { 00:10:27.278 "name": "Nvme0", 00:10:27.278 "trtype": "rdma", 00:10:27.278 "traddr": "192.168.100.8", 00:10:27.278 "adrfam": "ipv4", 00:10:27.278 "trsvcid": "4420", 00:10:27.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:27.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:27.278 "hdgst": false, 00:10:27.278 "ddgst": false 00:10:27.278 }, 00:10:27.278 "method": "bdev_nvme_attach_controller" 00:10:27.278 }' 00:10:27.536 [2024-05-14 23:56:56.642636] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:10:27.536 [2024-05-14 23:56:56.642709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483077 ] 00:10:27.536 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.536 [2024-05-14 23:56:56.714320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.536 [2024-05-14 23:56:56.823860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.818 Running I/O for 10 seconds... 
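Note that bdevperf never sees a config file on disk: the --json /dev/fd/63 in the trace is bash process substitution over the JSON printed above. Once I/O starts, the script polls the bdev's read counter over bdevperf's own RPC socket. A condensed sketch (loop bounds, flags, and the jq filter match the trace; the polling interval is assumed):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!

  # waitforio: give it up to 10 polls to accumulate 100 reads on Nvme0n1
  for ((i = 10; i != 0; i--)); do
      read_io_count=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
          jq -r '.bdevs[0].num_read_ops')
      [ "$read_io_count" -ge 100 ] && break
      sleep 1    # interval assumed; not visible in the trace
  done

With I/O flowing, the test removes and re-adds the host NQN (the nvmf_subsystem_remove_host / nvmf_subsystem_add_host calls just below), which is what provokes the SQ-deletion abort storm in the next block.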
00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1459 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1459 -ge 100 ']' 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.390 23:56:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:29.322 [2024-05-14 23:56:58.662213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x187600 00:10:29.322 [2024-05-14 23:56:58.662273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:3b90 p:0 m:0 dnr:0
00:10:29.322 [... roughly 60 further nvme_io_qpair_print_command / "ABORTED - SQ DELETION" completion pairs trimmed: WRITE lba:71808-73600 (keys 0x187600/0x187100/0x187500) and READ lba:65536-71296 (key 0x187400), identical apart from cid, lba, and buffer address ...]
00:10:29.324 [2024-05-14 23:56:58.664140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x187400 00:10:29.324
[2024-05-14 23:56:58.664154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:3b90 p:0 m:0 dnr:0 00:10:29.324 [2024-05-14 23:56:58.664169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x187400 00:10:29.324 [2024-05-14 23:56:58.664183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:3b90 p:0 m:0 dnr:0 00:10:29.324 [2024-05-14 23:56:58.665455] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:10:29.324 [2024-05-14 23:56:58.666620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:29.324 task offset: 71680 on job bdev=Nvme0n1 fails 00:10:29.324 00:10:29.324 Latency(us) 00:10:29.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.324 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:29.324 Job: Nvme0n1 ended in about 1.63 seconds with error 00:10:29.324 Verification LBA range: start 0x0 length 0x400 00:10:29.324 Nvme0n1 : 1.63 940.23 58.76 39.18 0.00 64730.62 2706.39 1012846.74 00:10:29.324 =================================================================================================================== 00:10:29.324 Total : 940.23 58.76 39.18 0.00 64730.62 2706.39 1012846.74 00:10:29.324 23:56:58 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 483077 00:10:29.324 23:56:58 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:29.582 { 00:10:29.582 "params": { 00:10:29.582 "name": "Nvme$subsystem", 00:10:29.582 "trtype": "$TEST_TRANSPORT", 00:10:29.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.582 "adrfam": "ipv4", 00:10:29.582 "trsvcid": "$NVMF_PORT", 00:10:29.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.582 "hdgst": ${hdgst:-false}, 00:10:29.582 "ddgst": ${ddgst:-false} 00:10:29.582 }, 00:10:29.582 "method": "bdev_nvme_attach_controller" 00:10:29.582 } 00:10:29.582 EOF 00:10:29.582 )") 00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:29.582 23:56:58 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:29.582 "params": { 00:10:29.582 "name": "Nvme0", 00:10:29.582 "trtype": "rdma", 00:10:29.582 "traddr": "192.168.100.8", 00:10:29.582 "adrfam": "ipv4", 00:10:29.582 "trsvcid": "4420", 00:10:29.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:29.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:29.582 "hdgst": false, 00:10:29.582 "ddgst": false 00:10:29.582 }, 00:10:29.582 "method": "bdev_nvme_attach_controller" 00:10:29.582 }' 00:10:29.582 [2024-05-14 23:56:58.711316] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:10:29.582 [2024-05-14 23:56:58.711393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483248 ] 00:10:29.582 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.582 [2024-05-14 23:56:58.787265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.582 [2024-05-14 23:56:58.897836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.840 Running I/O for 1 seconds... 00:10:30.771 00:10:30.771 Latency(us) 00:10:30.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.771 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:30.771 Verification LBA range: start 0x0 length 0x400 00:10:30.771 Nvme0n1 : 1.00 2550.44 159.40 0.00 0.00 24566.75 1159.02 38059.43 00:10:30.771 =================================================================================================================== 00:10:30.771 Total : 2550.44 159.40 0.00 0.00 24566.75 1159.02 38059.43 00:10:31.336 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 483077 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:31.336 rmmod nvme_rdma 00:10:31.336 rmmod nvme_fabrics 00:10:31.336 
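The teardown spanning this block and the next is the stock nvmftestfini path. Roughly (loop bound and module names match the trace; the back-off interval is assumed):

  set +e
  for i in {1..20}; do              # nvme-rdma can stay busy while queues drain, so retry
      modprobe -v -r nvme-rdma && break
      sleep 0.5                     # back-off assumed; not visible in the trace
  done
  modprobe -v -r nvme-fabrics
  set -e

  killprocess() {
      local pid=$1
      [[ -z $pid ]] && return 1
      # Guard against a recycled pid: only kill if it still looks like our app
      if [[ $(uname) == Linux ]]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [[ $process_name == sudo ]] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }
  killprocess 482918    # the nvmfpid recorded when the target started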
23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 482918 ']' 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 482918 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 482918 ']' 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 482918 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 482918 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 482918' 00:10:31.336 killing process with pid 482918 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 482918 00:10:31.336 [2024-05-14 23:57:00.469588] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:31.336 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 482918 00:10:31.336 [2024-05-14 23:57:00.557187] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:31.594 [2024-05-14 23:57:00.829405] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:31.594 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.594 23:57:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:31.594 23:57:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:31.594 00:10:31.594 real 0m7.628s 00:10:31.594 user 0m23.024s 00:10:31.594 sys 0m2.789s 00:10:31.594 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:31.594 23:57:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:31.594 ************************************ 00:10:31.594 END TEST nvmf_host_management 00:10:31.594 ************************************ 00:10:31.594 23:57:00 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:31.594 23:57:00 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:31.594 23:57:00 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:31.594 23:57:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:31.594 ************************************ 00:10:31.594 START TEST nvmf_lvol 00:10:31.594 ************************************ 00:10:31.594 23:57:00 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:31.852 * Looking for test storage... 00:10:31.852 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:31.852 23:57:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.853 23:57:00 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:10:34.381 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:10:34.381 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:10:34.381 Found net devices under 0000:09:00.0: mlx_0_0 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:10:34.381 Found net devices under 0000:09:00.1: mlx_0_1 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol 
-- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:10:34.381 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:34.382 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:34.382 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:10:34.382 altname enp9s0f0np0 00:10:34.382 inet 192.168.100.8/24 scope global mlx_0_0 00:10:34.382 valid_lft forever preferred_lft forever 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:34.382 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:34.382 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:10:34.382 altname enp9s0f1np1 00:10:34.382 inet 192.168.100.9/24 scope global mlx_0_1 00:10:34.382 valid_lft forever preferred_lft forever 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.382 
23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:34.382 192.168.100.9' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:34.382 192.168.100.9' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:34.382 192.168.100.9' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=485697 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 485697 
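For reference, the address discovery just traced reduces to a short shell pipeline. A minimal sketch, assembled only from the nvmf/common.sh commands echoed above (function name and field positions as shown at common.sh lines 112-113 and 456-458; the addresses are what this rig reported):

    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per line; field 4 is ADDR/PREFIX, so strip the prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9
    # The two-line RDMA_IP_LIST is then split exactly as the trace shows:
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)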
00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 485697 ']' 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:34.382 23:57:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:34.382 [2024-05-14 23:57:03.437643] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:10:34.382 [2024-05-14 23:57:03.437729] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.382 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.382 [2024-05-14 23:57:03.511839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:34.382 [2024-05-14 23:57:03.627605] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.382 [2024-05-14 23:57:03.627667] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.382 [2024-05-14 23:57:03.627683] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.382 [2024-05-14 23:57:03.627696] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.382 [2024-05-14 23:57:03.627708] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.382 [2024-05-14 23:57:03.627789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.382 [2024-05-14 23:57:03.627841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.382 [2024-05-14 23:57:03.627858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.314 23:57:04 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:35.314 23:57:04 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:10:35.314 23:57:04 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.314 23:57:04 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:35.314 23:57:04 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:35.314 23:57:04 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.314 23:57:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:35.314 [2024-05-14 23:57:04.658162] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8eee60/0x8f3350) succeed. 00:10:35.573 [2024-05-14 23:57:04.668955] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8f0400/0x9349e0) succeed. 
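The target bring-up that produced the two create_ib_device notices above follows the pattern below; a condensed sketch of the nvmfappstart and nvmf_create_transport steps as echoed in the trace (rpc_py path as defined at nvmf_lvol.sh@16; the internals of waitforlisten are elided):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # -i 0: shared-memory id 0; -e 0xFFFF: full tracepoint mask; -m 0x7: reactors on cores 0-2
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    waitforlisten $nvmfpid   # blocks until /var/tmp/spdk.sock accepts RPCs
    # RDMA transport with 1024 shared buffers and an 8192-byte IO unit, per the trace
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192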
00:10:35.573 23:57:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.832 23:57:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:35.832 23:57:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.089 23:57:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:36.089 23:57:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:36.348 23:57:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:36.606 23:57:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8dcb04a9-fe23-4f6d-8ab1-b792a312c874 00:10:36.606 23:57:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8dcb04a9-fe23-4f6d-8ab1-b792a312c874 lvol 20 00:10:37.170 23:57:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bcef40cf-dddc-4746-aa46-eedff5f2a2d1 00:10:37.170 23:57:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:37.170 23:57:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bcef40cf-dddc-4746-aa46-eedff5f2a2d1 00:10:37.735 23:57:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:37.735 [2024-05-14 23:57:07.051527] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:37.735 [2024-05-14 23:57:07.051863] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:37.735 23:57:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:38.299 23:57:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=486290 00:10:38.299 23:57:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:38.299 23:57:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:38.299 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.232 23:57:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bcef40cf-dddc-4746-aa46-eedff5f2a2d1 MY_SNAPSHOT 00:10:39.489 23:57:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f793a5f4-8e72-48a2-b84c-e69b4a2752cb 00:10:39.489 23:57:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bcef40cf-dddc-4746-aa46-eedff5f2a2d1 30 00:10:39.746 23:57:08 nvmf_rdma.nvmf_lvol -- 
target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f793a5f4-8e72-48a2-b84c-e69b4a2752cb MY_CLONE 00:10:40.004 23:57:09 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=45ce49bb-3127-49d0-9a84-869c2d78a4f8 00:10:40.004 23:57:09 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 45ce49bb-3127-49d0-9a84-869c2d78a4f8 00:10:40.261 23:57:09 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 486290 00:10:50.267 Initializing NVMe Controllers 00:10:50.267 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:50.267 Controller IO queue size 128, less than required. 00:10:50.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:50.268 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:50.268 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:50.268 Initialization complete. Launching workers. 00:10:50.268 ======================================================== 00:10:50.268 Latency(us) 00:10:50.268 Device Information : IOPS MiB/s Average min max 00:10:50.268 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14269.60 55.74 8973.18 3071.84 57596.87 00:10:50.268 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14172.60 55.36 9034.09 3340.93 47794.17 00:10:50.268 ======================================================== 00:10:50.268 Total : 28442.20 111.10 9003.53 3071.84 57596.87 00:10:50.268 00:10:50.268 23:57:18 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bcef40cf-dddc-4746-aa46-eedff5f2a2d1 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8dcb04a9-fe23-4f6d-8ab1-b792a312c874 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.268 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:50.268 rmmod nvme_rdma 00:10:50.268 rmmod nvme_fabrics 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 485697 ']' 00:10:50.525 23:57:19 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 485697 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 485697 ']' 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 485697 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 485697 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 485697' 00:10:50.525 killing process with pid 485697 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 485697 00:10:50.525 [2024-05-14 23:57:19.652330] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:50.525 23:57:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 485697 00:10:50.525 [2024-05-14 23:57:19.725511] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:50.782 23:57:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:50.782 23:57:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:50.782 00:10:50.782 real 0m19.152s 00:10:50.782 user 1m15.860s 00:10:50.782 sys 0m3.040s 00:10:50.782 23:57:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:50.782 23:57:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:50.782 ************************************ 00:10:50.782 END TEST nvmf_lvol 00:10:50.782 ************************************ 00:10:50.782 23:57:20 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:50.782 23:57:20 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:50.782 23:57:20 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:50.782 23:57:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:50.782 ************************************ 00:10:50.782 START TEST nvmf_lvs_grow 00:10:50.782 ************************************ 00:10:50.782 23:57:20 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:51.042 * Looking for test storage... 
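Before the lvs_grow run gets going, note that the nvmf_lvol test that just passed boils down to a short RPC sequence. A condensed sketch using only the rpc.py and spdk_nvme_perf invocations echoed in the trace above; the UUID capture is simplified (the real script parses the create calls' output), bare command names are assumed on PATH, and sizes follow MALLOC_BDEV_SIZE=64, LVOL_BDEV_INIT_SIZE=20, and LVOL_BDEV_FINAL_SIZE=30 from nvmf_lvol.sh:

    # Build the stack: raid0 over two malloc bdevs, an lvolstore on top, one lvol in it
    $rpc_py bdev_malloc_create 64 512            # -> Malloc0 (size 64, 512 B blocks)
    $rpc_py bdev_malloc_create 64 512            # -> Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)
    # Export the lvol over NVMe/RDMA
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # Exercise snapshot/resize/clone/inflate while spdk_nvme_perf drives randwrite I/O
    spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc_py bdev_lvol_resize "$lvol" 30
    clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
    $rpc_py bdev_lvol_inflate "$clone"
    wait   # let the perf run finish, then tear down
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc_py bdev_lvol_delete "$lvol"
    $rpc_py bdev_lvol_delete_lvstore -u "$lvs"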
00:10:51.042 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:51.042 23:57:20 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:10:53.575 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:10:53.575 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:10:53.575 Found net devices under 0000:09:00.0: mlx_0_0 00:10:53.575 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:10:53.576 Found net devices under 0000:09:00.1: mlx_0_1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.576 
23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:53.576 
23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:53.576 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.576 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:10:53.576 altname enp9s0f0np0 00:10:53.576 inet 192.168.100.8/24 scope global mlx_0_0 00:10:53.576 valid_lft forever preferred_lft forever 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:53.576 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.576 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:10:53.576 altname enp9s0f1np1 00:10:53.576 inet 192.168.100.9/24 scope global mlx_0_1 00:10:53.576 valid_lft forever preferred_lft forever 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 
00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:53.576 192.168.100.9' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:53.576 192.168.100.9' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:53.576 192.168.100.9' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=490184 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 490184 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 490184 ']' 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.576 23:57:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:53.577 23:57:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:53.835 [2024-05-14 23:57:22.933560] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:10:53.835 [2024-05-14 23:57:22.933645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.835 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.835 [2024-05-14 23:57:23.002404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.835 [2024-05-14 23:57:23.113276] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.835 [2024-05-14 23:57:23.113332] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.835 [2024-05-14 23:57:23.113360] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.835 [2024-05-14 23:57:23.113372] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.835 [2024-05-14 23:57:23.113382] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
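(Condensed, the nvmfappstart step above amounts to the following; paths are shortened to the repo root, and waitforlisten's internals, only partially visible here, are sketched further down:

    # Start the NVMe-oF target on core 0 with all tracepoint groups enabled;
    # the -i 0 -e 0xFFFF -m 0x1 flags are copied verbatim from the @480 line.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"          # block until /var/tmp/spdk.sock answers
    # Arm the cleanup handler exactly as the @484 line does.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

Everything after this point in the test talks to that process through scripts/rpc.py.)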
00:10:53.835 [2024-05-14 23:57:23.113414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.093 23:57:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:54.093 23:57:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:10:54.093 23:57:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.093 23:57:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.093 23:57:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:54.093 23:57:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.093 23:57:23 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:54.351 [2024-05-14 23:57:23.527377] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x666900/0x66adf0) succeed. 00:10:54.351 [2024-05-14 23:57:23.539159] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x667e00/0x6ac480) succeed. 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:54.351 ************************************ 00:10:54.351 START TEST lvs_grow_clean 00:10:54.351 ************************************ 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:54.351 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:54.608 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:54.608 23:57:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:54.866 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:10:54.866 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:10:54.866 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:55.123 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:55.123 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:55.123 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 lvol 150 00:10:55.381 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9c41e7e6-681e-4892-b191-917d01daa2fd 00:10:55.381 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:55.381 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:55.639 [2024-05-14 23:57:24.925402] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:55.639 [2024-05-14 23:57:24.925493] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:55.639 true 00:10:55.639 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:10:55.639 23:57:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:55.897 23:57:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:55.897 23:57:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:56.155 23:57:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9c41e7e6-681e-4892-b191-917d01daa2fd 00:10:56.413 23:57:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:56.671 [2024-05-14 23:57:25.984570] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:56.671 [2024-05-14 23:57:25.984921] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:56.671 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=490631 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 490631 /var/tmp/bdevperf.sock 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 490631 ']' 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:56.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:56.928 23:57:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:57.186 [2024-05-14 23:57:26.286475] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
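(waitforlisten, used at @50 above to gate on bdevperf's RPC socket, is essentially a bounded poll loop: keep the child alive, keep probing the UNIX-domain socket. A loose stand-in, consistent with the max_retries=100 visible at @832; the real helper in common/autotest_common.sh carries more bookkeeping, and the rpc_get_methods probe here is an assumption, since the exact check is not shown in this trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1          # app died before listening
            # Probe the socket; any successful RPC round-trip means it is up.
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                            # timed out
    }

    waitforlisten 490631 /var/tmp/bdevperf.sock

The same gate is reused below for every nvmf_tgt and bdevperf instance the test spawns.)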
00:10:57.186 [2024-05-14 23:57:26.286558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490631 ] 00:10:57.186 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.186 [2024-05-14 23:57:26.358837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.186 [2024-05-14 23:57:26.474870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.119 23:57:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:58.119 23:57:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:10:58.119 23:57:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:58.376 Nvme0n1 00:10:58.376 23:57:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:58.634 [ 00:10:58.634 { 00:10:58.634 "name": "Nvme0n1", 00:10:58.634 "aliases": [ 00:10:58.634 "9c41e7e6-681e-4892-b191-917d01daa2fd" 00:10:58.634 ], 00:10:58.634 "product_name": "NVMe disk", 00:10:58.634 "block_size": 4096, 00:10:58.634 "num_blocks": 38912, 00:10:58.634 "uuid": "9c41e7e6-681e-4892-b191-917d01daa2fd", 00:10:58.634 "assigned_rate_limits": { 00:10:58.634 "rw_ios_per_sec": 0, 00:10:58.634 "rw_mbytes_per_sec": 0, 00:10:58.634 "r_mbytes_per_sec": 0, 00:10:58.634 "w_mbytes_per_sec": 0 00:10:58.634 }, 00:10:58.634 "claimed": false, 00:10:58.634 "zoned": false, 00:10:58.634 "supported_io_types": { 00:10:58.634 "read": true, 00:10:58.634 "write": true, 00:10:58.634 "unmap": true, 00:10:58.634 "write_zeroes": true, 00:10:58.634 "flush": true, 00:10:58.634 "reset": true, 00:10:58.634 "compare": true, 00:10:58.634 "compare_and_write": true, 00:10:58.634 "abort": true, 00:10:58.634 "nvme_admin": true, 00:10:58.634 "nvme_io": true 00:10:58.634 }, 00:10:58.634 "memory_domains": [ 00:10:58.634 { 00:10:58.634 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:58.634 "dma_device_type": 0 00:10:58.634 } 00:10:58.634 ], 00:10:58.634 "driver_specific": { 00:10:58.634 "nvme": [ 00:10:58.634 { 00:10:58.634 "trid": { 00:10:58.634 "trtype": "RDMA", 00:10:58.634 "adrfam": "IPv4", 00:10:58.634 "traddr": "192.168.100.8", 00:10:58.634 "trsvcid": "4420", 00:10:58.634 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:58.634 }, 00:10:58.634 "ctrlr_data": { 00:10:58.634 "cntlid": 1, 00:10:58.634 "vendor_id": "0x8086", 00:10:58.634 "model_number": "SPDK bdev Controller", 00:10:58.634 "serial_number": "SPDK0", 00:10:58.634 "firmware_revision": "24.05", 00:10:58.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:58.634 "oacs": { 00:10:58.634 "security": 0, 00:10:58.634 "format": 0, 00:10:58.634 "firmware": 0, 00:10:58.634 "ns_manage": 0 00:10:58.634 }, 00:10:58.634 "multi_ctrlr": true, 00:10:58.634 "ana_reporting": false 00:10:58.634 }, 00:10:58.634 "vs": { 00:10:58.634 "nvme_version": "1.3" 00:10:58.634 }, 00:10:58.634 "ns_data": { 00:10:58.634 "id": 1, 00:10:58.634 "can_share": true 00:10:58.634 } 00:10:58.634 } 00:10:58.634 ], 00:10:58.634 "mp_policy": "active_passive" 00:10:58.634 } 00:10:58.634 } 00:10:58.634 ] 00:10:58.634 23:57:27 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=490770 00:10:58.634 23:57:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:58.634 23:57:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:58.634 Running I/O for 10 seconds... 00:11:00.006 Latency(us) 00:11:00.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.006 Nvme0n1 : 1.00 21159.00 82.65 0.00 0.00 0.00 0.00 0.00 00:11:00.006 =================================================================================================================== 00:11:00.006 Total : 21159.00 82.65 0.00 0.00 0.00 0.00 0.00 00:11:00.006 00:11:00.572 23:57:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:11:00.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.829 Nvme0n1 : 2.00 21472.00 83.88 0.00 0.00 0.00 0.00 0.00 00:11:00.829 =================================================================================================================== 00:11:00.829 Total : 21472.00 83.88 0.00 0.00 0.00 0.00 0.00 00:11:00.829 00:11:00.829 true 00:11:00.829 23:57:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:11:00.829 23:57:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:01.087 23:57:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:01.087 23:57:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:01.087 23:57:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 490770 00:11:01.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.652 Nvme0n1 : 3.00 21527.67 84.09 0.00 0.00 0.00 0.00 0.00 00:11:01.652 =================================================================================================================== 00:11:01.652 Total : 21527.67 84.09 0.00 0.00 0.00 0.00 0.00 00:11:01.652 00:11:03.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.029 Nvme0n1 : 4.00 21928.00 85.66 0.00 0.00 0.00 0.00 0.00 00:11:03.029 =================================================================================================================== 00:11:03.029 Total : 21928.00 85.66 0.00 0.00 0.00 0.00 0.00 00:11:03.029 00:11:03.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.990 Nvme0n1 : 5.00 21991.60 85.90 0.00 0.00 0.00 0.00 0.00 00:11:03.990 =================================================================================================================== 00:11:03.990 Total : 21991.60 85.90 0.00 0.00 0.00 0.00 0.00 00:11:03.990 00:11:04.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.923 Nvme0n1 : 6.00 22069.67 86.21 0.00 0.00 0.00 0.00 0.00 00:11:04.923 
=================================================================================================================== 00:11:04.923 Total : 22069.67 86.21 0.00 0.00 0.00 0.00 0.00 00:11:04.923 00:11:05.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.856 Nvme0n1 : 7.00 22263.00 86.96 0.00 0.00 0.00 0.00 0.00 00:11:05.856 =================================================================================================================== 00:11:05.856 Total : 22263.00 86.96 0.00 0.00 0.00 0.00 0.00 00:11:05.856 00:11:06.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.788 Nvme0n1 : 8.00 22388.12 87.45 0.00 0.00 0.00 0.00 0.00 00:11:06.788 =================================================================================================================== 00:11:06.788 Total : 22388.12 87.45 0.00 0.00 0.00 0.00 0.00 00:11:06.788 00:11:07.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.719 Nvme0n1 : 9.00 22439.22 87.65 0.00 0.00 0.00 0.00 0.00 00:11:07.719 =================================================================================================================== 00:11:07.719 Total : 22439.22 87.65 0.00 0.00 0.00 0.00 0.00 00:11:07.719 00:11:08.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:08.652 Nvme0n1 : 10.00 22444.80 87.67 0.00 0.00 0.00 0.00 0.00 00:11:08.652 =================================================================================================================== 00:11:08.652 Total : 22444.80 87.67 0.00 0.00 0.00 0.00 0.00 00:11:08.652 00:11:08.652 00:11:08.652 Latency(us) 00:11:08.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:08.652 Nvme0n1 : 10.00 22443.85 87.67 0.00 0.00 5697.90 3762.25 15243.19 00:11:08.652 =================================================================================================================== 00:11:08.652 Total : 22443.85 87.67 0.00 0.00 5697.90 3762.25 15243.19 00:11:08.652 0 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 490631 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 490631 ']' 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 490631 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 490631 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 490631' 00:11:08.910 killing process with pid 490631 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 490631 00:11:08.910 Received shutdown signal, test time was about 10.000000 seconds 00:11:08.910 00:11:08.910 Latency(us) 00:11:08.910 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:11:08.910 =================================================================================================================== 00:11:08.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:08.910 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 490631 00:11:09.167 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:09.425 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:09.684 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:11:09.684 23:57:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:09.942 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:09.942 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:09.942 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:10.200 [2024-05-14 23:57:39.336817] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:10.200 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:11:10.458 request: 00:11:10.458 { 00:11:10.458 "uuid": "0e8f1d96-3cec-4127-bed5-fb9202ddc3e6", 00:11:10.458 "method": "bdev_lvol_get_lvstores", 00:11:10.458 "req_id": 1 00:11:10.458 } 00:11:10.458 Got JSON-RPC error response 00:11:10.458 response: 00:11:10.458 { 00:11:10.458 "code": -19, 00:11:10.458 "message": "No such device" 00:11:10.458 } 00:11:10.458 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:10.458 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:10.458 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:10.458 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:10.458 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:10.716 aio_bdev 00:11:10.716 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9c41e7e6-681e-4892-b191-917d01daa2fd 00:11:10.716 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=9c41e7e6-681e-4892-b191-917d01daa2fd 00:11:10.716 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:10.716 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:11:10.716 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:10.716 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:10.716 23:57:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:10.974 23:57:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9c41e7e6-681e-4892-b191-917d01daa2fd -t 2000 00:11:11.231 [ 00:11:11.231 { 00:11:11.231 "name": "9c41e7e6-681e-4892-b191-917d01daa2fd", 00:11:11.231 "aliases": [ 00:11:11.231 "lvs/lvol" 00:11:11.232 ], 00:11:11.232 "product_name": "Logical Volume", 00:11:11.232 "block_size": 4096, 00:11:11.232 "num_blocks": 38912, 00:11:11.232 "uuid": "9c41e7e6-681e-4892-b191-917d01daa2fd", 00:11:11.232 "assigned_rate_limits": { 00:11:11.232 "rw_ios_per_sec": 0, 00:11:11.232 "rw_mbytes_per_sec": 0, 00:11:11.232 "r_mbytes_per_sec": 0, 00:11:11.232 "w_mbytes_per_sec": 0 00:11:11.232 }, 00:11:11.232 "claimed": false, 00:11:11.232 "zoned": false, 00:11:11.232 "supported_io_types": { 00:11:11.232 "read": true, 00:11:11.232 "write": true, 00:11:11.232 "unmap": true, 00:11:11.232 "write_zeroes": true, 00:11:11.232 "flush": false, 00:11:11.232 "reset": true, 00:11:11.232 "compare": false, 00:11:11.232 "compare_and_write": false, 00:11:11.232 "abort": false, 00:11:11.232 "nvme_admin": false, 00:11:11.232 "nvme_io": false 00:11:11.232 }, 00:11:11.232 "driver_specific": { 00:11:11.232 "lvol": { 00:11:11.232 "lvol_store_uuid": "0e8f1d96-3cec-4127-bed5-fb9202ddc3e6", 00:11:11.232 "base_bdev": "aio_bdev", 00:11:11.232 "thin_provision": false, 00:11:11.232 
"num_allocated_clusters": 38, 00:11:11.232 "snapshot": false, 00:11:11.232 "clone": false, 00:11:11.232 "esnap_clone": false 00:11:11.232 } 00:11:11.232 } 00:11:11.232 } 00:11:11.232 ] 00:11:11.232 23:57:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:11:11.232 23:57:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:11:11.232 23:57:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:11.489 23:57:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:11.489 23:57:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:11:11.489 23:57:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:11.748 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:11.748 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9c41e7e6-681e-4892-b191-917d01daa2fd 00:11:12.007 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e8f1d96-3cec-4127-bed5-fb9202ddc3e6 00:11:12.265 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:12.523 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:12.523 00:11:12.523 real 0m18.169s 00:11:12.523 user 0m18.408s 00:11:12.523 sys 0m1.326s 00:11:12.523 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.523 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:12.523 ************************************ 00:11:12.523 END TEST lvs_grow_clean 00:11:12.523 ************************************ 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:12.524 ************************************ 00:11:12.524 START TEST lvs_grow_dirty 00:11:12.524 ************************************ 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:12.524 23:57:41 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:12.524 23:57:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:13.090 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:13.090 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:13.090 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:13.090 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:13.090 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:13.347 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:13.347 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:13.347 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 44542b5c-2371-4124-9a46-c57d1dcd8924 lvol 150 00:11:13.605 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=febd3134-6431-438b-96bd-cceffeae0f9b 00:11:13.605 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:13.605 23:57:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:13.863 [2024-05-14 23:57:43.152187] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:13.863 [2024-05-14 23:57:43.152294] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:13.863 true 00:11:13.863 23:57:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:13.863 23:57:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:14.121 23:57:43 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:14.121 23:57:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:14.378 23:57:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 febd3134-6431-438b-96bd-cceffeae0f9b 00:11:14.636 23:57:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:14.894 [2024-05-14 23:57:44.151437] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:14.894 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=492814 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 492814 /var/tmp/bdevperf.sock 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 492814 ']' 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:15.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:15.152 23:57:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:15.152 [2024-05-14 23:57:44.491452] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
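(The @41-@44 steps above are the whole NVMe-oF export path for the freshly created lvol. Pulled out of the trace with paths shortened to rpc.py, the sequence is:

    # Export the 150 MiB lvol (UUID taken from this run) over NVMe/RDMA, port 4420.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 febd3134-6431-438b-96bd-cceffeae0f9b
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

bdevperf then attaches to that subsystem from a second process and drives randwrite I/O while the lvstore is grown underneath it.)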
00:11:15.152 [2024-05-14 23:57:44.491524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492814 ] 00:11:15.410 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.411 [2024-05-14 23:57:44.564588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.411 [2024-05-14 23:57:44.679863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.344 23:57:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:16.344 23:57:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:11:16.344 23:57:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:16.602 Nvme0n1 00:11:16.602 23:57:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:16.888 [ 00:11:16.888 { 00:11:16.888 "name": "Nvme0n1", 00:11:16.888 "aliases": [ 00:11:16.888 "febd3134-6431-438b-96bd-cceffeae0f9b" 00:11:16.888 ], 00:11:16.888 "product_name": "NVMe disk", 00:11:16.888 "block_size": 4096, 00:11:16.888 "num_blocks": 38912, 00:11:16.888 "uuid": "febd3134-6431-438b-96bd-cceffeae0f9b", 00:11:16.888 "assigned_rate_limits": { 00:11:16.888 "rw_ios_per_sec": 0, 00:11:16.888 "rw_mbytes_per_sec": 0, 00:11:16.888 "r_mbytes_per_sec": 0, 00:11:16.888 "w_mbytes_per_sec": 0 00:11:16.888 }, 00:11:16.888 "claimed": false, 00:11:16.888 "zoned": false, 00:11:16.888 "supported_io_types": { 00:11:16.888 "read": true, 00:11:16.888 "write": true, 00:11:16.888 "unmap": true, 00:11:16.888 "write_zeroes": true, 00:11:16.888 "flush": true, 00:11:16.888 "reset": true, 00:11:16.888 "compare": true, 00:11:16.888 "compare_and_write": true, 00:11:16.888 "abort": true, 00:11:16.888 "nvme_admin": true, 00:11:16.888 "nvme_io": true 00:11:16.888 }, 00:11:16.888 "memory_domains": [ 00:11:16.888 { 00:11:16.888 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:11:16.888 "dma_device_type": 0 00:11:16.888 } 00:11:16.888 ], 00:11:16.888 "driver_specific": { 00:11:16.888 "nvme": [ 00:11:16.888 { 00:11:16.888 "trid": { 00:11:16.888 "trtype": "RDMA", 00:11:16.888 "adrfam": "IPv4", 00:11:16.888 "traddr": "192.168.100.8", 00:11:16.888 "trsvcid": "4420", 00:11:16.888 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:16.888 }, 00:11:16.888 "ctrlr_data": { 00:11:16.888 "cntlid": 1, 00:11:16.888 "vendor_id": "0x8086", 00:11:16.888 "model_number": "SPDK bdev Controller", 00:11:16.888 "serial_number": "SPDK0", 00:11:16.888 "firmware_revision": "24.05", 00:11:16.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:16.888 "oacs": { 00:11:16.888 "security": 0, 00:11:16.888 "format": 0, 00:11:16.888 "firmware": 0, 00:11:16.888 "ns_manage": 0 00:11:16.888 }, 00:11:16.888 "multi_ctrlr": true, 00:11:16.888 "ana_reporting": false 00:11:16.888 }, 00:11:16.888 "vs": { 00:11:16.888 "nvme_version": "1.3" 00:11:16.888 }, 00:11:16.888 "ns_data": { 00:11:16.888 "id": 1, 00:11:16.888 "can_share": true 00:11:16.888 } 00:11:16.888 } 00:11:16.888 ], 00:11:16.888 "mp_policy": "active_passive" 00:11:16.888 } 00:11:16.888 } 00:11:16.888 ] 00:11:16.888 23:57:46 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=492957 00:11:16.888 23:57:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:16.888 23:57:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:16.888 Running I/O for 10 seconds... 00:11:18.267 Latency(us) 00:11:18.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:18.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.267 Nvme0n1 : 1.00 21667.00 84.64 0.00 0.00 0.00 0.00 0.00 00:11:18.267 =================================================================================================================== 00:11:18.267 Total : 21667.00 84.64 0.00 0.00 0.00 0.00 0.00 00:11:18.267 00:11:18.833 23:57:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:18.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.833 Nvme0n1 : 2.00 21952.00 85.75 0.00 0.00 0.00 0.00 0.00 00:11:18.833 =================================================================================================================== 00:11:18.833 Total : 21952.00 85.75 0.00 0.00 0.00 0.00 0.00 00:11:18.833 00:11:19.091 true 00:11:19.091 23:57:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:19.091 23:57:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:19.348 23:57:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:19.348 23:57:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:19.348 23:57:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 492957 00:11:19.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.913 Nvme0n1 : 3.00 22241.00 86.88 0.00 0.00 0.00 0.00 0.00 00:11:19.913 =================================================================================================================== 00:11:19.913 Total : 22241.00 86.88 0.00 0.00 0.00 0.00 0.00 00:11:19.913 00:11:20.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.844 Nvme0n1 : 4.00 22456.25 87.72 0.00 0.00 0.00 0.00 0.00 00:11:20.844 =================================================================================================================== 00:11:20.844 Total : 22456.25 87.72 0.00 0.00 0.00 0.00 0.00 00:11:20.844 00:11:22.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.216 Nvme0n1 : 5.00 22426.20 87.60 0.00 0.00 0.00 0.00 0.00 00:11:22.216 =================================================================================================================== 00:11:22.216 Total : 22426.20 87.60 0.00 0.00 0.00 0.00 0.00 00:11:22.216 00:11:23.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.149 Nvme0n1 : 6.00 22613.33 88.33 0.00 0.00 0.00 0.00 0.00 00:11:23.149 
=================================================================================================================== 00:11:23.149 Total : 22613.33 88.33 0.00 0.00 0.00 0.00 0.00 00:11:23.149 00:11:24.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.082 Nvme0n1 : 7.00 22596.86 88.27 0.00 0.00 0.00 0.00 0.00 00:11:24.082 =================================================================================================================== 00:11:24.082 Total : 22596.86 88.27 0.00 0.00 0.00 0.00 0.00 00:11:24.082 00:11:25.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.015 Nvme0n1 : 8.00 22660.00 88.52 0.00 0.00 0.00 0.00 0.00 00:11:25.015 =================================================================================================================== 00:11:25.015 Total : 22660.00 88.52 0.00 0.00 0.00 0.00 0.00 00:11:25.015 00:11:25.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.946 Nvme0n1 : 9.00 22695.00 88.65 0.00 0.00 0.00 0.00 0.00 00:11:25.946 =================================================================================================================== 00:11:25.946 Total : 22695.00 88.65 0.00 0.00 0.00 0.00 0.00 00:11:25.946 00:11:26.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.879 Nvme0n1 : 10.00 22793.40 89.04 0.00 0.00 0.00 0.00 0.00 00:11:26.879 =================================================================================================================== 00:11:26.879 Total : 22793.40 89.04 0.00 0.00 0.00 0.00 0.00 00:11:26.879 00:11:26.879 00:11:26.879 Latency(us) 00:11:26.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.879 Nvme0n1 : 10.00 22792.77 89.03 0.00 0.00 5610.44 3713.71 21262.79 00:11:26.879 =================================================================================================================== 00:11:26.879 Total : 22792.77 89.03 0.00 0.00 5610.44 3713.71 21262.79 00:11:26.879 0 00:11:26.879 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 492814 00:11:26.879 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 492814 ']' 00:11:26.879 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 492814 00:11:26.879 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:11:26.879 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:26.879 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 492814 00:11:27.137 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:27.137 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:27.137 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 492814' 00:11:27.137 killing process with pid 492814 00:11:27.137 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 492814 00:11:27.137 Received shutdown signal, test time was about 10.000000 seconds 00:11:27.137 00:11:27.137 Latency(us) 00:11:27.137 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:11:27.137 =================================================================================================================== 00:11:27.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:27.137 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 492814 00:11:27.395 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:27.652 23:57:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:27.909 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:27.909 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 490184 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 490184 00:11:28.167 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 490184 Killed "${NVMF_APP[@]}" "$@" 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=494290 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 494290 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 494290 ']' 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
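(This is the half that makes the test "dirty": with 61 clusters still free in the lvstore, the @74-@76 steps kill the target with SIGKILL so the lvstore is never cleanly unloaded, then start a fresh nvmf_tgt (pid 494290 here) and re-register the same AIO file, which is what forces the blobstore recovery NOTICEs a few lines below. Schematically, with paths shortened:

    kill -9 "$nvmfpid"                              # unclean shutdown, lvstore left dirty
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &      # restart the target
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    # -> blobstore.c bs_recover: "Performing recovery on blobstore",
    #    followed by "Recover: blob 0x0" / "Recover: blob 0x1"

The point of the check that follows is that free_clusters and total_data_clusters survive the replay unchanged.)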
00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:28.167 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.167 [2024-05-14 23:57:57.372898] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:11:28.167 [2024-05-14 23:57:57.373002] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.167 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.167 [2024-05-14 23:57:57.442876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.425 [2024-05-14 23:57:57.554257] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.425 [2024-05-14 23:57:57.554316] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.425 [2024-05-14 23:57:57.554328] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.425 [2024-05-14 23:57:57.554352] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.425 [2024-05-14 23:57:57.554362] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.425 [2024-05-14 23:57:57.554391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.425 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:28.425 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:11:28.425 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.425 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.425 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.425 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.425 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:28.682 [2024-05-14 23:57:57.969427] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:28.682 [2024-05-14 23:57:57.969560] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:28.682 [2024-05-14 23:57:57.969620] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:28.682 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:28.682 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev febd3134-6431-438b-96bd-cceffeae0f9b 00:11:28.682 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=febd3134-6431-438b-96bd-cceffeae0f9b 00:11:28.682 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:28.682 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:11:28.682 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:28.682 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:28.682 23:57:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:28.940 23:57:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b febd3134-6431-438b-96bd-cceffeae0f9b -t 2000 00:11:29.198 [ 00:11:29.198 { 00:11:29.198 "name": "febd3134-6431-438b-96bd-cceffeae0f9b", 00:11:29.198 "aliases": [ 00:11:29.198 "lvs/lvol" 00:11:29.198 ], 00:11:29.198 "product_name": "Logical Volume", 00:11:29.198 "block_size": 4096, 00:11:29.198 "num_blocks": 38912, 00:11:29.198 "uuid": "febd3134-6431-438b-96bd-cceffeae0f9b", 00:11:29.198 "assigned_rate_limits": { 00:11:29.198 "rw_ios_per_sec": 0, 00:11:29.198 "rw_mbytes_per_sec": 0, 00:11:29.198 "r_mbytes_per_sec": 0, 00:11:29.198 "w_mbytes_per_sec": 0 00:11:29.198 }, 00:11:29.198 "claimed": false, 00:11:29.198 "zoned": false, 00:11:29.198 "supported_io_types": { 00:11:29.198 "read": true, 00:11:29.198 "write": true, 00:11:29.198 "unmap": true, 00:11:29.198 "write_zeroes": true, 00:11:29.198 "flush": false, 00:11:29.198 "reset": true, 00:11:29.198 "compare": false, 00:11:29.198 "compare_and_write": false, 00:11:29.198 "abort": false, 00:11:29.198 "nvme_admin": false, 00:11:29.198 "nvme_io": false 00:11:29.198 }, 00:11:29.198 "driver_specific": { 00:11:29.198 "lvol": { 00:11:29.198 "lvol_store_uuid": "44542b5c-2371-4124-9a46-c57d1dcd8924", 00:11:29.198 "base_bdev": "aio_bdev", 00:11:29.198 "thin_provision": false, 00:11:29.198 "num_allocated_clusters": 38, 00:11:29.198 "snapshot": false, 00:11:29.198 "clone": false, 00:11:29.198 "esnap_clone": false 00:11:29.198 } 00:11:29.198 } 00:11:29.198 } 00:11:29.198 ] 00:11:29.198 23:57:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:11:29.198 23:57:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:29.198 23:57:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:29.456 23:57:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:29.456 23:57:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:29.456 23:57:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:29.714 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:29.714 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:29.972 [2024-05-14 23:57:59.234393] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 
00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:29.972 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:30.230 request: 00:11:30.230 { 00:11:30.230 "uuid": "44542b5c-2371-4124-9a46-c57d1dcd8924", 00:11:30.230 "method": "bdev_lvol_get_lvstores", 00:11:30.230 "req_id": 1 00:11:30.230 } 00:11:30.230 Got JSON-RPC error response 00:11:30.230 response: 00:11:30.230 { 00:11:30.230 "code": -19, 00:11:30.230 "message": "No such device" 00:11:30.230 } 00:11:30.230 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:30.230 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.230 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.230 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.230 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:30.488 aio_bdev 00:11:30.488 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev febd3134-6431-438b-96bd-cceffeae0f9b 00:11:30.488 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=febd3134-6431-438b-96bd-cceffeae0f9b 00:11:30.488 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:30.488 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:11:30.488 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:30.488 23:57:59 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:30.488 23:57:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:30.746 23:58:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b febd3134-6431-438b-96bd-cceffeae0f9b -t 2000 00:11:31.004 [ 00:11:31.004 { 00:11:31.004 "name": "febd3134-6431-438b-96bd-cceffeae0f9b", 00:11:31.004 "aliases": [ 00:11:31.004 "lvs/lvol" 00:11:31.005 ], 00:11:31.005 "product_name": "Logical Volume", 00:11:31.005 "block_size": 4096, 00:11:31.005 "num_blocks": 38912, 00:11:31.005 "uuid": "febd3134-6431-438b-96bd-cceffeae0f9b", 00:11:31.005 "assigned_rate_limits": { 00:11:31.005 "rw_ios_per_sec": 0, 00:11:31.005 "rw_mbytes_per_sec": 0, 00:11:31.005 "r_mbytes_per_sec": 0, 00:11:31.005 "w_mbytes_per_sec": 0 00:11:31.005 }, 00:11:31.005 "claimed": false, 00:11:31.005 "zoned": false, 00:11:31.005 "supported_io_types": { 00:11:31.005 "read": true, 00:11:31.005 "write": true, 00:11:31.005 "unmap": true, 00:11:31.005 "write_zeroes": true, 00:11:31.005 "flush": false, 00:11:31.005 "reset": true, 00:11:31.005 "compare": false, 00:11:31.005 "compare_and_write": false, 00:11:31.005 "abort": false, 00:11:31.005 "nvme_admin": false, 00:11:31.005 "nvme_io": false 00:11:31.005 }, 00:11:31.005 "driver_specific": { 00:11:31.005 "lvol": { 00:11:31.005 "lvol_store_uuid": "44542b5c-2371-4124-9a46-c57d1dcd8924", 00:11:31.005 "base_bdev": "aio_bdev", 00:11:31.005 "thin_provision": false, 00:11:31.005 "num_allocated_clusters": 38, 00:11:31.005 "snapshot": false, 00:11:31.005 "clone": false, 00:11:31.005 "esnap_clone": false 00:11:31.005 } 00:11:31.005 } 00:11:31.005 } 00:11:31.005 ] 00:11:31.005 23:58:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:11:31.005 23:58:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:31.005 23:58:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:31.295 23:58:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:31.295 23:58:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:31.295 23:58:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:31.553 23:58:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:31.553 23:58:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete febd3134-6431-438b-96bd-cceffeae0f9b 00:11:31.810 23:58:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 44542b5c-2371-4124-9a46-c57d1dcd8924 00:11:32.068 23:58:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
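The dirty-recovery sequence traced above condenses to a handful of RPCs. A minimal sketch, using the paths and UUIDs from this run and assuming nvmf_tgt is already up and reachable on the default RPC socket:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  AIO=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
  LVOL=febd3134-6431-438b-96bd-cceffeae0f9b
  LVS=44542b5c-2371-4124-9a46-c57d1dcd8924
  $RPC bdev_aio_create "$AIO" aio_bdev 4096    # re-attach the dirty backing file; blobstore recovery replays the metadata
  $RPC bdev_wait_for_examine                   # block until examine callbacks complete
  $RPC bdev_get_bdevs -b "$LVOL" -t 2000       # wait up to 2000 ms for the lvol bdev to reappear
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'   # expected: still 61 of 99 clusters free
  $RPC bdev_lvol_delete "$LVOL"                # teardown, mirroring steps @92 to @94 above
  $RPC bdev_lvol_delete_lvstore -u "$LVS"
  $RPC bdev_aio_delete aio_bdev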
00:11:32.325 23:58:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:32.325 00:11:32.325 real 0m19.793s 00:11:32.325 user 0m51.432s 00:11:32.325 sys 0m3.891s 00:11:32.325 23:58:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:32.325 23:58:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:32.325 ************************************ 00:11:32.325 END TEST lvs_grow_dirty 00:11:32.325 ************************************ 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:32.583 nvmf_trace.0 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:32.583 rmmod nvme_rdma 00:11:32.583 rmmod nvme_fabrics 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 494290 ']' 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 494290 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 494290 ']' 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 494290 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 494290 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 494290' 00:11:32.583 killing process with pid 494290 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 494290 00:11:32.583 23:58:01 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 494290 00:11:32.841 23:58:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:32.841 23:58:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:32.841 00:11:32.841 real 0m41.945s 00:11:32.841 user 1m15.829s 00:11:32.841 sys 0m7.542s 00:11:32.841 23:58:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:32.841 23:58:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:32.841 ************************************ 00:11:32.841 END TEST nvmf_lvs_grow 00:11:32.841 ************************************ 00:11:32.841 23:58:02 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:32.841 23:58:02 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:32.841 23:58:02 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:32.842 23:58:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:32.842 ************************************ 00:11:32.842 START TEST nvmf_bdev_io_wait 00:11:32.842 ************************************ 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:32.842 * Looking for test storage... 
00:11:32.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:32.842 23:58:02 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:11:32.842 23:58:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.370 
23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:35.370 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:35.370 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:35.370 Found net devices under 0000:09:00.0: mlx_0_0 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.370 23:58:04 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:35.370 Found net devices under 0000:09:00.1: mlx_0_1 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:35.370 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:35.371 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:35.371 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:11:35.371 altname enp9s0f0np0 00:11:35.371 inet 192.168.100.8/24 scope global mlx_0_0 00:11:35.371 valid_lft forever preferred_lft forever 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:35.371 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:35.371 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:11:35.371 altname enp9s0f1np1 00:11:35.371 inet 192.168.100.9/24 scope global mlx_0_1 00:11:35.371 valid_lft forever preferred_lft forever 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:35.371 23:58:04 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:35.371 192.168.100.9' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:35.371 192.168.100.9' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:35.371 192.168.100.9' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:35.371 23:58:04 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=496958 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 496958 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 496958 ']' 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:35.371 23:58:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.629 [2024-05-14 23:58:04.747601] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:11:35.629 [2024-05-14 23:58:04.747685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.629 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.629 [2024-05-14 23:58:04.821715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.629 [2024-05-14 23:58:04.939189] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.629 [2024-05-14 23:58:04.939260] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.629 [2024-05-14 23:58:04.939276] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.629 [2024-05-14 23:58:04.939289] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.629 [2024-05-14 23:58:04.939308] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:35.629 [2024-05-14 23:58:04.939393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.629 [2024-05-14 23:58:04.939461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.629 [2024-05-14 23:58:04.939552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.629 [2024-05-14 23:58:04.939554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.557 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.557 [2024-05-14 23:58:05.828749] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a379d0/0x1a3bec0) succeed. 00:11:36.557 [2024-05-14 23:58:05.839149] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a39010/0x1a7d550) succeed. 
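Condensed, the target-side bring-up traced above is three RPCs against an nvmf_tgt started with --wait-for-rpc. A minimal sketch with the same values as this run; the deliberately small bdev_io pool set before framework init is what later forces the io_wait path this test exercises:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_set_options -p 5 -c 1      # tiny bdev_io pool (-p) and per-thread cache (-c), applied before init
  $RPC framework_start_init            # finish the subsystem initialization deferred by --wait-for-rpc
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # creates the mlx5_0/mlx5_1 IB devices noted above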
00:11:36.815 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.815 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:36.815 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.815 23:58:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.815 Malloc0 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.815 [2024-05-14 23:58:06.045648] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:36.815 [2024-05-14 23:58:06.046020] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=497116 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=497117 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=497119 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=497121 
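With the transport up, the export path is four more RPCs (RPC as in the previous sketch); the four bdevperf workers whose PIDs were just recorded consume the per-worker JSON configs printed below:

  $RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420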
00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:36.815 { 00:11:36.815 "params": { 00:11:36.815 "name": "Nvme$subsystem", 00:11:36.815 "trtype": "$TEST_TRANSPORT", 00:11:36.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:36.815 "adrfam": "ipv4", 00:11:36.815 "trsvcid": "$NVMF_PORT", 00:11:36.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:36.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:36.815 "hdgst": ${hdgst:-false}, 00:11:36.815 "ddgst": ${ddgst:-false} 00:11:36.815 }, 00:11:36.815 "method": "bdev_nvme_attach_controller" 00:11:36.815 } 00:11:36.815 EOF 00:11:36.815 )") 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:36.815 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:36.816 { 00:11:36.816 "params": { 00:11:36.816 "name": "Nvme$subsystem", 00:11:36.816 "trtype": "$TEST_TRANSPORT", 00:11:36.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:36.816 "adrfam": "ipv4", 00:11:36.816 "trsvcid": "$NVMF_PORT", 00:11:36.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:36.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:36.816 "hdgst": ${hdgst:-false}, 00:11:36.816 "ddgst": ${ddgst:-false} 00:11:36.816 }, 00:11:36.816 "method": "bdev_nvme_attach_controller" 00:11:36.816 } 00:11:36.816 EOF 00:11:36.816 )") 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:36.816 { 00:11:36.816 "params": { 00:11:36.816 "name": "Nvme$subsystem", 00:11:36.816 "trtype": "$TEST_TRANSPORT", 00:11:36.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:36.816 "adrfam": "ipv4", 00:11:36.816 "trsvcid": "$NVMF_PORT", 00:11:36.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:36.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:36.816 "hdgst": ${hdgst:-false}, 00:11:36.816 "ddgst": ${ddgst:-false} 00:11:36.816 }, 00:11:36.816 "method": "bdev_nvme_attach_controller" 00:11:36.816 } 
00:11:36.816 EOF 00:11:36.816 )") 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:36.816 { 00:11:36.816 "params": { 00:11:36.816 "name": "Nvme$subsystem", 00:11:36.816 "trtype": "$TEST_TRANSPORT", 00:11:36.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:36.816 "adrfam": "ipv4", 00:11:36.816 "trsvcid": "$NVMF_PORT", 00:11:36.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:36.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:36.816 "hdgst": ${hdgst:-false}, 00:11:36.816 "ddgst": ${ddgst:-false} 00:11:36.816 }, 00:11:36.816 "method": "bdev_nvme_attach_controller" 00:11:36.816 } 00:11:36.816 EOF 00:11:36.816 )") 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 497116 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:36.816 "params": { 00:11:36.816 "name": "Nvme1", 00:11:36.816 "trtype": "rdma", 00:11:36.816 "traddr": "192.168.100.8", 00:11:36.816 "adrfam": "ipv4", 00:11:36.816 "trsvcid": "4420", 00:11:36.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:36.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:36.816 "hdgst": false, 00:11:36.816 "ddgst": false 00:11:36.816 }, 00:11:36.816 "method": "bdev_nvme_attach_controller" 00:11:36.816 }' 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:36.816 "params": { 00:11:36.816 "name": "Nvme1", 00:11:36.816 "trtype": "rdma", 00:11:36.816 "traddr": "192.168.100.8", 00:11:36.816 "adrfam": "ipv4", 00:11:36.816 "trsvcid": "4420", 00:11:36.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:36.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:36.816 "hdgst": false, 00:11:36.816 "ddgst": false 00:11:36.816 }, 00:11:36.816 "method": "bdev_nvme_attach_controller" 00:11:36.816 }' 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:36.816 "params": { 00:11:36.816 "name": "Nvme1", 00:11:36.816 "trtype": "rdma", 00:11:36.816 "traddr": "192.168.100.8", 00:11:36.816 "adrfam": "ipv4", 00:11:36.816 "trsvcid": "4420", 00:11:36.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:36.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:36.816 "hdgst": false, 00:11:36.816 "ddgst": false 00:11:36.816 }, 00:11:36.816 "method": "bdev_nvme_attach_controller" 00:11:36.816 }' 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:36.816 23:58:06 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf 
'%s\n' '{ 00:11:36.816 "params": { 00:11:36.816 "name": "Nvme1", 00:11:36.816 "trtype": "rdma", 00:11:36.816 "traddr": "192.168.100.8", 00:11:36.816 "adrfam": "ipv4", 00:11:36.816 "trsvcid": "4420", 00:11:36.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:36.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:36.816 "hdgst": false, 00:11:36.816 "ddgst": false 00:11:36.816 }, 00:11:36.816 "method": "bdev_nvme_attach_controller" 00:11:36.816 }' 00:11:36.816 [2024-05-14 23:58:06.090683] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:11:36.816 [2024-05-14 23:58:06.090754] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:36.816 [2024-05-14 23:58:06.090860] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:11:36.816 [2024-05-14 23:58:06.090860] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:11:36.816 [2024-05-14 23:58:06.090859] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:11:36.816 [2024-05-14 23:58:06.090954] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:36.816 [2024-05-14 23:58:06.090954] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:36.816 [2024-05-14 23:58:06.090955] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:36.816 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.073 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.073 [2024-05-14 23:58:06.277173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.073 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.073 [2024-05-14 23:58:06.373390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:37.073 [2024-05-14 23:58:06.375337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.329 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.329 [2024-05-14 23:58:06.474047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:37.329 [2024-05-14 23:58:06.476916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.329 [2024-05-14 23:58:06.550401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.329 [2024-05-14 23:58:06.578130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:37.329 [2024-05-14 23:58:06.644569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:11:37.586 Running I/O for 1 seconds... 00:11:37.586 Running I/O for 1 seconds... 00:11:37.586 Running I/O for 1 seconds... 00:11:37.586 Running I/O for 1 seconds...
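The xtrace above shows how the JSON handed to each bdevperf instance is built: nvmf/common.sh pushes one heredoc fragment per subsystem into a bash array (common.sh@532-554), joins the fragments with IFS=',' (@557), and runs the result through jq (@556) before feeding it to bdevperf on /dev/fd/63. A minimal sketch of that pattern, assuming the fragments are simply wrapped in a JSON array (the outer wrapping used by SPDK's real gen_nvmf_target_json is not visible in this trace):

# Sketch only: collect one bdev_nvme_attach_controller fragment per
# subsystem, comma-join the fragments, and validate/pretty-print via jq.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # With IFS=, the "${config[*]}" expansion comma-joins the fragments
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}

# e.g.: bdevperf -m 0x40 -i 3 --json <(gen_target_json_sketch 1) -q 128 -o 4096 -w flush -t 1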
00:11:38.519 00:11:38.519 Latency(us) 00:11:38.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.519 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:38.520 Nvme1n1 : 1.00 202636.32 791.55 0.00 0.00 629.30 254.86 2160.26 00:11:38.520 =================================================================================================================== 00:11:38.520 Total : 202636.32 791.55 0.00 0.00 629.30 254.86 2160.26 00:11:38.520 00:11:38.520 Latency(us) 00:11:38.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.520 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:38.520 Nvme1n1 : 1.01 14993.25 58.57 0.00 0.00 8507.13 5825.42 15534.46 00:11:38.520 =================================================================================================================== 00:11:38.520 Total : 14993.25 58.57 0.00 0.00 8507.13 5825.42 15534.46 00:11:38.520 00:11:38.520 Latency(us) 00:11:38.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.520 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:38.520 Nvme1n1 : 1.01 14136.27 55.22 0.00 0.00 9020.16 6140.97 17185.00 00:11:38.520 =================================================================================================================== 00:11:38.520 Total : 14136.27 55.22 0.00 0.00 9020.16 6140.97 17185.00 00:11:38.520 00:11:38.520 Latency(us) 00:11:38.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.520 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:38.520 Nvme1n1 : 1.00 15902.15 62.12 0.00 0.00 8024.99 4636.07 19418.07 00:11:38.520 =================================================================================================================== 00:11:38.520 Total : 15902.15 62.12 0.00 0.00 8024.99 4636.07 19418.07 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 497117 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 497119 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 497121 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:39.085 rmmod nvme_rdma 
00:11:39.085 rmmod nvme_fabrics 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 496958 ']' 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 496958 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 496958 ']' 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 496958 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 496958 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 496958' 00:11:39.085 killing process with pid 496958 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 496958 00:11:39.085 [2024-05-14 23:58:08.222428] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:39.085 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 496958 00:11:39.085 [2024-05-14 23:58:08.305626] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:39.344 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:39.344 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:39.344 00:11:39.344 real 0m6.445s 00:11:39.344 user 0m21.200s 00:11:39.344 sys 0m3.108s 00:11:39.344 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:39.344 23:58:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.344 ************************************ 00:11:39.344 END TEST nvmf_bdev_io_wait 00:11:39.344 ************************************ 00:11:39.344 23:58:08 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:39.344 23:58:08 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:39.344 23:58:08 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:39.344 23:58:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:39.344 ************************************ 00:11:39.344 START TEST nvmf_queue_depth 00:11:39.344 ************************************ 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:39.344 * Looking for test storage... 
00:11:39.344 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.344 23:58:08 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.344 23:58:08 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.880 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:41.881 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:41.881 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:41.881 Found net devices under 0000:09:00.0: mlx_0_0 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:41.881 Found net devices under 0000:09:00.1: mlx_0_1 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:41.881 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.881 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:11:41.881 altname enp9s0f0np0 00:11:41.881 inet 192.168.100.8/24 scope global mlx_0_0 00:11:41.881 valid_lft forever preferred_lft forever 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:41.881 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.881 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:11:41.881 altname enp9s0f1np1 00:11:41.881 inet 192.168.100.9/24 scope global mlx_0_1 00:11:41.881 valid_lft forever preferred_lft forever 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.881 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:41.882 192.168.100.9' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:41.882 192.168.100.9' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:41.882 192.168.100.9' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 
192.168.100.8 ']' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=499481 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 499481 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 499481 ']' 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.882 23:58:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.140 [2024-05-14 23:58:11.264113] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:11:42.140 [2024-05-14 23:58:11.264204] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.140 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.140 [2024-05-14 23:58:11.334206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.140 [2024-05-14 23:58:11.448995] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.140 [2024-05-14 23:58:11.449050] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.140 [2024-05-14 23:58:11.449066] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.140 [2024-05-14 23:58:11.449080] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.140 [2024-05-14 23:58:11.449092] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
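At this point nvmf_tgt (pid 499481) has been launched with core mask 0x2, and the harness blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock accepts connections before any rpc_cmd is issued. A hypothetical stand-in for that wait loop (not SPDK's actual waitforlisten, which also tracks the pid; assumes an nc with OpenBSD-style -U/-z UNIX-socket probing):

# Hypothetical helper: poll a UNIX-domain RPC socket until the freshly
# launched app starts listening, or give up after ~10 seconds.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while ((retries-- > 0)); do
        # -U: UNIX-domain socket, -z: connect-and-close probe only
        nc -zU "$sock" 2>/dev/null && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}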
00:11:42.140 [2024-05-14 23:58:11.449120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.072 [2024-05-14 23:58:12.241803] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d74bd0/0x1d790c0) succeed. 00:11:43.072 [2024-05-14 23:58:12.253564] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d760d0/0x1dba750) succeed. 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.072 Malloc0 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.072 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.073 [2024-05-14 23:58:12.356379] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:43.073 [2024-05-14 23:58:12.356727] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 
4420 *** 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=499644 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 499644 /var/tmp/bdevperf.sock 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 499644 ']' 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:43.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:43.073 23:58:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.073 [2024-05-14 23:58:12.401350] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:11:43.073 [2024-05-14 23:58:12.401421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499644 ] 00:11:43.330 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.330 [2024-05-14 23:58:12.477764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.330 [2024-05-14 23:58:12.593042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.261 23:58:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:44.261 23:58:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:11:44.261 23:58:13 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:44.261 23:58:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.261 23:58:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.261 NVMe0n1 00:11:44.261 23:58:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.261 23:58:13 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:44.261 Running I/O for 10 seconds... 
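This is the remote-controlled bdevperf flow: -z starts the app idle on its own RPC socket (-r /var/tmp/bdevperf.sock), the NVMe-oF controller is then attached through that socket, and bdevperf.py perform_tests triggers the workload configured on the command line (-q 1024 -o 4096 -w verify -t 10). Condensed from the trace above, with paths relative to an SPDK checkout:

SOCK=/var/tmp/bdevperf.sock

# -z: come up idle and wait for RPC instead of opening bdevs at startup
./build/examples/bdevperf -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# (the harness waits for $SOCK to start listening before the next step)

# Attach the exported namespace over RDMA; it appears as bdev NVMe0n1
./scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the preconfigured verify workload, then reap the app
./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
wait "$bdevperf_pid"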
00:11:56.468 00:11:56.468 Latency(us) 00:11:56.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.468 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:56.468 Verification LBA range: start 0x0 length 0x4000 00:11:56.468 NVMe0n1 : 10.05 13278.87 51.87 0.00 0.00 76792.10 11796.48 48933.55 00:11:56.468 =================================================================================================================== 00:11:56.468 Total : 13278.87 51.87 0.00 0.00 76792.10 11796.48 48933.55 00:11:56.468 0 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 499644 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 499644 ']' 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 499644 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 499644 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 499644' 00:11:56.468 killing process with pid 499644 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 499644 00:11:56.468 Received shutdown signal, test time was about 10.000000 seconds 00:11:56.468 00:11:56.468 Latency(us) 00:11:56.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.468 =================================================================================================================== 00:11:56.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 499644 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:56.468 rmmod nvme_rdma 00:11:56.468 rmmod nvme_fabrics 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 499481 ']' 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 499481 00:11:56.468 
23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 499481 ']' 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 499481 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:56.468 23:58:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 499481 00:11:56.468 23:58:24 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:56.468 23:58:24 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:56.468 23:58:24 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 499481' 00:11:56.468 killing process with pid 499481 00:11:56.468 23:58:24 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 499481 00:11:56.468 [2024-05-14 23:58:24.016471] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:56.468 23:58:24 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 499481 00:11:56.468 [2024-05-14 23:58:24.066239] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:56.468 23:58:24 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.468 23:58:24 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:56.468 00:11:56.468 real 0m15.761s 00:11:56.468 user 0m25.967s 00:11:56.468 sys 0m2.468s 00:11:56.468 23:58:24 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:56.468 23:58:24 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:56.468 ************************************ 00:11:56.468 END TEST nvmf_queue_depth 00:11:56.468 ************************************ 00:11:56.468 23:58:24 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:56.468 23:58:24 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:56.468 23:58:24 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:56.468 23:58:24 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:56.468 ************************************ 00:11:56.468 START TEST nvmf_target_multipath 00:11:56.468 ************************************ 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:56.468 * Looking for test storage... 
00:11:56.468 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.468 23:58:24 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
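nvmftestinit is about to repeat the NIC discovery and address lookup already traced in the queue-depth run above; the per-interface part reduces to the ip/awk/cut pipeline from nvmf/common.sh@112-113:

# get_ip_address as traced above: first IPv4 address on an RDMA netdev.
get_ip_address() {
    local interface=$1
    # "ip -o" prints one record per line; field 4 is "address/prefix"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# e.g. get_ip_address mlx_0_0  ->  192.168.100.8
#      get_ip_address mlx_0_1  ->  192.168.100.9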
00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:11:56.469 23:58:24 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:57.842 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.842 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.843 23:58:26 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:57.843 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:57.843 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:57.843 Found net devices under 0000:09:00.0: mlx_0_0 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:57.843 Found net devices under 0000:09:00.1: mlx_0_1 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:57.843 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:57.844 23:58:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:57.844 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:57.844 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:11:57.844 altname enp9s0f0np0 00:11:57.844 inet 192.168.100.8/24 scope global mlx_0_0 00:11:57.844 valid_lft forever preferred_lft forever 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:57.844 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:57.844 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:11:57.844 altname enp9s0f1np1 00:11:57.844 inet 192.168.100.9/24 scope global mlx_0_1 00:11:57.844 valid_lft forever preferred_lft forever 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@422 -- # return 0 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print 
$4}' 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:57.844 192.168.100.9' 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:57.844 192.168.100.9' 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:57.844 192.168.100.9' 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:11:57.844 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:11:57.845 run this test only with TCP transport for now 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:57.845 rmmod nvme_rdma 00:11:57.845 rmmod nvme_fabrics 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # 
nvmftestfini 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:57.845 00:11:57.845 real 0m2.710s 00:11:57.845 user 0m1.000s 00:11:57.845 sys 0m1.798s 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:57.845 23:58:27 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:57.845 ************************************ 00:11:57.845 END TEST nvmf_target_multipath 00:11:57.845 ************************************ 00:11:57.845 23:58:27 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:57.845 23:58:27 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:57.845 23:58:27 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:57.845 23:58:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:58.104 ************************************ 00:11:58.104 START TEST nvmf_zcopy 00:11:58.104 ************************************ 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:58.104 * Looking for test storage... 
00:11:58.104 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.104 23:58:27 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:00.636 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.636 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:00.637 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:00.637 Found net devices under 0000:09:00.0: mlx_0_0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:00.637 Found net devices under 0000:09:00.1: mlx_0_1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:00.637 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:00.637 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:12:00.637 altname enp9s0f0np0 00:12:00.637 inet 192.168.100.8/24 scope global mlx_0_0 00:12:00.637 valid_lft forever preferred_lft forever 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:00.637 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:00.637 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:12:00.637 altname enp9s0f1np1 00:12:00.637 inet 192.168.100.9/24 scope global mlx_0_1 00:12:00.637 valid_lft forever preferred_lft forever 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:00.637 192.168.100.9' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:00.637 192.168.100.9' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:00.637 192.168.100.9' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:00.637 23:58:29 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=505014 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 505014 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 505014 ']' 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 
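The target app is about to come up, but like the multipath run above, zcopy bails out on RDMA. A paraphrase of the guard seen at multipath.sh@51-54 and zcopy.sh@15-17 in this trace (not the verbatim scripts; the variable name $TEST_TRANSPORT is an assumption inferred from the literal rdma in the comparison):

    # Paraphrased from the xtrace; both tests short-circuit on non-TCP transports.
    if [ "$TEST_TRANSPORT" != "tcp" ]; then    # '[' rdma '!=' tcp ']'
        echo 'Unsupported transport: rdma'     # zcopy.sh@16; multipath instead prints
        exit 0                                 # 'run this test only with TCP transport
    fi                                         #  for now', then runs nvmftestfini

On exit 0 the EXIT trap installed earlier still fires, which is why process_shm and nvmftestfini appear below even though the test body never ran.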
00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:00.638 23:58:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:00.638 [2024-05-14 23:58:29.933964] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:12:00.638 [2024-05-14 23:58:29.934062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.638 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.896 [2024-05-14 23:58:30.009625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.896 [2024-05-14 23:58:30.127070] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.896 [2024-05-14 23:58:30.127147] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.896 [2024-05-14 23:58:30.127163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.896 [2024-05-14 23:58:30.127177] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.896 [2024-05-14 23:58:30.127199] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.896 [2024-05-14 23:58:30.127229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:12:01.154 Unsupported transport: rdma 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@804 -- # type=--id 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@805 -- # id=0 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@816 -- # for n in $shm_files 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- 
common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:01.154 nvmf_trace.0 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # return 0 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:01.154 rmmod nvme_rdma 00:12:01.154 rmmod nvme_fabrics 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 505014 ']' 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 505014 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 505014 ']' 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 505014 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 505014 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 505014' 00:12:01.154 killing process with pid 505014 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 505014 00:12:01.154 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 505014 00:12:01.412 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.412 23:58:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:01.412 00:12:01.412 real 0m3.448s 00:12:01.412 user 0m1.845s 00:12:01.412 sys 0m2.097s 00:12:01.412 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:01.412 23:58:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.412 ************************************ 00:12:01.412 END TEST nvmf_zcopy 00:12:01.412 ************************************ 00:12:01.412 23:58:30 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:12:01.412 23:58:30 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:01.412 23:58:30 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:01.412 23:58:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:01.412 ************************************ 
00:12:01.412 START TEST nvmf_nmic 00:12:01.412 ************************************ 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:12:01.412 * Looking for test storage... 00:12:01.412 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.412 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
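The ballooning PATH strings traced from paths/export.sh above are expected rather than a fault: the file unconditionally prepends the same tool directories on every source, so each test script that sources it (nvmf_nmic here, nvmf_fio_target later in this log) grows the value again. A minimal sketch of what its traced lines @2-@5 appear to contain, reconstructed only from the values printed above (the actual file contents are an assumption):

  # hypothetical paths/export.sh, inferred from the xtrace output
  PATH=/opt/golangci/1.54.2/bin:$PATH   # @2: prepend, no duplicate check
  PATH=/opt/go/1.21.1/bin:$PATH         # @3
  PATH=/opt/protoc/21.7/bin:$PATH       # @4
  export PATH                           # @5; @6 then echoes the final value

Duplicate PATH entries are harmless to lookup (first match wins), which is presumably why the harness never deduplicates them; they only make the trace noisy.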
00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.670 23:58:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:04.209 23:58:33 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:04.209 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:04.209 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:04.209 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:04.210 Found net devices under 0000:09:00.0: mlx_0_0 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:04.210 Found net devices under 0000:09:00.1: mlx_0_1 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@420 -- # rdma_device_init 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 
00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:04.210 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:04.210 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:12:04.210 altname enp9s0f0np0 00:12:04.210 inet 192.168.100.8/24 scope global mlx_0_0 00:12:04.210 valid_lft forever preferred_lft forever 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:04.210 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:04.210 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:12:04.210 altname enp9s0f1np1 00:12:04.210 inet 192.168.100.9/24 scope global mlx_0_1 00:12:04.210 valid_lft forever preferred_lft forever 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:04.210 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:04.211 192.168.100.9' 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:04.211 192.168.100.9' 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:04.211 192.168.100.9' 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:04.211 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=507120 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 507120 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 507120 ']' 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:04.212 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.212 [2024-05-14 23:58:33.430107] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:12:04.212 [2024-05-14 23:58:33.430184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.212 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.212 [2024-05-14 23:58:33.499706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.477 [2024-05-14 23:58:33.614189] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.477 [2024-05-14 23:58:33.614243] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.477 [2024-05-14 23:58:33.614257] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.477 [2024-05-14 23:58:33.614269] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.477 [2024-05-14 23:58:33.614278] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.477 [2024-05-14 23:58:33.614326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.477 [2024-05-14 23:58:33.614384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.477 [2024-05-14 23:58:33.614449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.477 [2024-05-14 23:58:33.614452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.477 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.477 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:12:04.477 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.477 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.478 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.478 23:58:33 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.478 23:58:33 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:04.478 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.478 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.478 [2024-05-14 23:58:33.802884] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7f3a20/0x7f7f10) succeed. 00:12:04.478 [2024-05-14 23:58:33.813839] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7f5060/0x8395a0) succeed. 
00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.735 Malloc0 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.735 23:58:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.735 [2024-05-14 23:58:34.001303] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:04.735 [2024-05-14 23:58:34.001625] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:04.735 test case1: single bdev can't be used in multiple subsystems 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.735 23:58:34 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:04.735 [2024-05-14 23:58:34.025416] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:12:04.735 [2024-05-14 23:58:34.025444] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:12:04.735 [2024-05-14 23:58:34.025458] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.735 request:
00:12:04.735 {
00:12:04.735 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:04.735 "namespace": {
00:12:04.735 "bdev_name": "Malloc0",
00:12:04.735 "no_auto_visible": false
00:12:04.735 },
00:12:04.735 "method": "nvmf_subsystem_add_ns",
00:12:04.735 "req_id": 1
00:12:04.735 }
00:12:04.735 Got JSON-RPC error response
00:12:04.735 response:
00:12:04.735 {
00:12:04.735 "code": -32602,
00:12:04.735 "message": "Invalid parameters"
00:12:04.735 }
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:12:04.735 Adding namespace failed - expected result.
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:12:04.735 test case2: host connect to nvmf target in multiple paths
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:04.735 [2024-05-14 23:58:34.037479] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:04.735 23:58:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:12:08.911 23:58:37 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421
00:12:12.186 23:58:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:12:12.186 23:58:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0
00:12:12.186 23:58:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:12:12.186 23:58:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:12:12.186 23:58:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2
00:12:14.108 23:58:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:12:14.108 23:58:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
23:58:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME
00:12:14.108 23:58:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:12:14.109 23:58:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:12:14.109 23:58:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0
00:12:14.109 23:58:43 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:12:14.109 [global]
00:12:14.109 thread=1
00:12:14.109 invalidate=1
00:12:14.109 rw=write
00:12:14.109 time_based=1
00:12:14.109 runtime=1
00:12:14.109 ioengine=libaio
00:12:14.109 direct=1
00:12:14.109 bs=4096
00:12:14.109 iodepth=1
00:12:14.109 norandommap=0
00:12:14.109 numjobs=1
00:12:14.109
00:12:14.109 verify_dump=1
00:12:14.109 verify_backlog=512
00:12:14.109 verify_state_save=0
00:12:14.109 do_verify=1
00:12:14.109 verify=crc32c-intel
00:12:14.109 [job0]
00:12:14.109 filename=/dev/nvme0n1
00:12:14.360 Could not set queue depth (nvme0n1)
00:12:14.365 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:12:14.365 fio-3.35
00:12:14.365 Starting 1 thread
00:12:15.736
00:12:15.736 job0: (groupid=0, jobs=1): err= 0: pid=508447: Tue May 14 23:58:44 2024
00:12:15.736 read: IOPS=6656, BW=26.0MiB/s (27.3MB/s)(26.0MiB/1000msec)
00:12:15.736 slat (nsec): min=5058, max=33295, avg=6838.99, stdev=2307.60
00:12:15.736 clat (usec): min=44, max=117, avg=67.16, stdev= 8.26
00:12:15.736 lat (usec): min=60, max=125, avg=74.00, stdev= 9.03
00:12:15.736 clat percentiles (usec):
00:12:15.736 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62],
00:12:15.736 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 67],
00:12:15.736 | 70.00th=[ 68], 80.00th=[ 71], 90.00th=[ 80], 95.00th=[ 87],
00:12:15.736 | 99.00th=[ 95], 99.50th=[ 98], 99.90th=[ 105], 99.95th=[ 109],
00:12:15.736 | 99.99th=[ 119]
00:12:15.736 write: IOPS=6828, BW=26.7MiB/s (28.0MB/s)(26.7MiB/1000msec); 0 zone resets
00:12:15.736 slat (nsec): min=6150, max=48348, avg=7833.21, stdev=2305.29
00:12:15.736 clat (usec): min=51, max=111, avg=62.66, stdev= 8.23
00:12:15.736 lat (usec): min=58, max=152, avg=70.50, stdev= 9.05
00:12:15.736 clat percentiles (usec):
00:12:15.736 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 56], 20.00th=[ 58],
00:12:15.736 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62],
00:12:15.736 | 70.00th=[ 63], 80.00th=[ 67], 90.00th=[ 76], 95.00th=[ 82],
00:12:15.736 | 99.00th=[ 90], 99.50th=[ 95], 99.90th=[ 105], 99.95th=[ 106],
00:12:15.736 | 99.99th=[ 113]
00:12:15.736 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1
00:12:15.736 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1
00:12:15.736 lat (usec) : 50=0.01%, 100=99.69%, 250=0.30%
00:12:15.736 cpu : usr=4.90%, sys=10.30%, ctx=13484, majf=0, minf=2
00:12:15.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:15.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:15.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:15.736 issued rwts: total=6656,6828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:15.736 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:15.736
00:12:15.736 Run status group 0 (all jobs):
00:12:15.736 READ: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=26.0MiB (27.3MB), run=1000-1000msec
00:12:15.736 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=26.7MiB (28.0MB), run=1000-1000msec
00:12:15.736
00:12:15.736 Disk stats (read/write):
00:12:15.736 nvme0n1: ios=6126/6144, merge=0/0, ticks=412/392, in_queue=804, util=90.78%
00:12:15.736 23:58:44 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:19.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:12:19.912 rmmod nvme_rdma
00:12:19.912 rmmod nvme_fabrics
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 507120 ']'
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 507120
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 507120 ']'
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 507120
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # uname
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:12:19.912 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 507120
00:12:20.170 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:12:20.170 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:12:20.170 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 507120'
00:12:20.170 killing process with pid 507120
00:12:20.170 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 507120
00:12:20.170 [2024-05-14 23:58:49.274041] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal
in v24.09 hit 1 times 00:12:20.170 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 507120 00:12:20.170 [2024-05-14 23:58:49.365532] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:20.427 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.427 23:58:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:20.427 00:12:20.427 real 0m18.976s 00:12:20.427 user 1m4.904s 00:12:20.427 sys 0m2.677s 00:12:20.427 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:20.427 23:58:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 ************************************ 00:12:20.427 END TEST nvmf_nmic 00:12:20.427 ************************************ 00:12:20.427 23:58:49 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:20.427 23:58:49 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:20.427 23:58:49 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:20.427 23:58:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 ************************************ 00:12:20.427 START TEST nvmf_fio_target 00:12:20.427 ************************************ 00:12:20.427 23:58:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:20.685 * Looking for test storage... 00:12:20.685 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.685 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.686 23:58:49 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.686 23:58:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:23.215 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:23.215 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.215 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:23.216 Found net devices under 0000:09:00.0: mlx_0_0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:23.216 Found net devices under 0000:09:00.1: mlx_0_1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:23.216 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:23.216 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:12:23.216 altname enp9s0f0np0 00:12:23.216 inet 192.168.100.8/24 scope global mlx_0_0 00:12:23.216 valid_lft forever preferred_lft forever 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:23.216 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:23.216 link/ether b8:59:9f:af:fe:11 
brd ff:ff:ff:ff:ff:ff 00:12:23.216 altname enp9s0f1np1 00:12:23.216 inet 192.168.100.9/24 scope global mlx_0_1 00:12:23.216 valid_lft forever preferred_lft forever 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 
-- # awk '{print $4}' 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:23.216 192.168.100.9' 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:23.216 192.168.100.9' 00:12:23.216 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:23.217 192.168.100.9' 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=511212 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 511212 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 511212 ']' 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:23.217 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.217 [2024-05-14 23:58:52.401905] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
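At this point the framework has resolved both RDMA interfaces (mlx_0_0 at 192.168.100.8, mlx_0_1 at 192.168.100.9), loaded nvme-rdma, launched the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and is blocking in waitforlisten until the target's RPC socket appears. A minimal bash sketch of that wait loop, reconstructed from the autotest_common.sh trace above; the real helper is more involved (for example it probes the socket through rpc.py rather than a bare file test), so treat the details here as assumptions:

    # Sketch of waitforlisten: poll until the target pid is alive and its
    # UNIX-domain RPC socket exists, or give up after max_retries tries.
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1  # target process exited
            [[ -S $rpc_addr ]] && return 0          # RPC socket is up
            sleep 0.5
        done
        return 1                                    # retries exhausted
    }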
00:12:23.217 [2024-05-14 23:58:52.402008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.217 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.217 [2024-05-14 23:58:52.475472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.474 [2024-05-14 23:58:52.593894] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.474 [2024-05-14 23:58:52.593956] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.474 [2024-05-14 23:58:52.593974] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.474 [2024-05-14 23:58:52.593988] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.475 [2024-05-14 23:58:52.594000] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.475 [2024-05-14 23:58:52.594077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.475 [2024-05-14 23:58:52.594107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.475 [2024-05-14 23:58:52.594228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.475 [2024-05-14 23:58:52.594231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.475 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:23.475 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:12:23.475 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.475 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.475 23:58:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.475 23:58:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.475 23:58:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:23.733 [2024-05-14 23:58:52.975699] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20f7a20/0x20fbf10) succeed. 00:12:23.733 [2024-05-14 23:58:52.986284] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20f9060/0x213d5a0) succeed. 
00:12:23.990 23:58:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:24.247 23:58:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:24.247 23:58:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:24.505 23:58:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:24.505 23:58:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:24.763 23:58:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:24.763 23:58:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.021 23:58:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:25.021 23:58:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:25.278 23:58:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.534 23:58:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:25.534 23:58:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.791 23:58:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:25.791 23:58:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.048 23:58:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:26.048 23:58:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:26.325 23:58:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.584 23:58:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:26.584 23:58:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.841 23:58:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:26.841 23:58:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:27.098 23:58:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:27.355 [2024-05-14 23:58:56.487144] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:27.355 [2024-05-14 23:58:56.487510] rdma.c:3032:nvmf_rdma_listen: 
*NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:27.355 23:58:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:27.612 23:58:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:27.869 23:58:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:32.046 23:59:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:32.046 23:59:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:12:32.046 23:59:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.046 23:59:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:12:32.046 23:59:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:12:32.046 23:59:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:12:33.444 23:59:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:33.444 23:59:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:33.444 23:59:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.444 23:59:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:12:33.445 23:59:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.445 23:59:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:12:33.445 23:59:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:33.445 [global] 00:12:33.445 thread=1 00:12:33.445 invalidate=1 00:12:33.445 rw=write 00:12:33.445 time_based=1 00:12:33.445 runtime=1 00:12:33.445 ioengine=libaio 00:12:33.445 direct=1 00:12:33.445 bs=4096 00:12:33.445 iodepth=1 00:12:33.445 norandommap=0 00:12:33.445 numjobs=1 00:12:33.445 00:12:33.445 verify_dump=1 00:12:33.445 verify_backlog=512 00:12:33.445 verify_state_save=0 00:12:33.445 do_verify=1 00:12:33.445 verify=crc32c-intel 00:12:33.445 [job0] 00:12:33.445 filename=/dev/nvme0n1 00:12:33.445 [job1] 00:12:33.445 filename=/dev/nvme0n2 00:12:33.445 [job2] 00:12:33.445 filename=/dev/nvme0n3 00:12:33.445 [job3] 00:12:33.445 filename=/dev/nvme0n4 00:12:33.445 Could not set queue depth (nvme0n1) 00:12:33.445 Could not set queue depth (nvme0n2) 00:12:33.445 Could not set queue depth (nvme0n3) 00:12:33.445 Could not set queue depth (nvme0n4) 00:12:33.702 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:33.702 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:33.702 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:33.702 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:33.702 
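The host side has now connected (nvme connect ... to nqn.2016-06.io.spdk:cnode1 at 192.168.100.8, service 4420), and waitforserial counted four block devices carrying the serial SPDKISFASTANDAWESOME; the four namespaces (Malloc0, Malloc1, raid0, concat0, presumably in the order they were added) surface as /dev/nvme0n1 through /dev/nvme0n4. Comparing the wrapper invocation fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v with the job file it echoed above suggests the mapping -i to bs, -d to iodepth, -t to rw, -r to runtime, and -v to the crc32c-intel verify stanza; that mapping is inferred from this log, not taken from the wrapper source. A roughly equivalent stand-alone fio command for job0 alone, using only standard fio options, would be:

    # Hypothetical single-job equivalent of the echoed job file; the
    # wrapper itself drives all four namespace devices from one job file.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread \
        --ioengine=libaio --direct=1 --invalidate=1 \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel \
        --verify_backlog=512 --verify_dump=1 --verify_state_save=0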
fio-3.35 00:12:33.702 Starting 4 threads 00:12:35.073 00:12:35.073 job0: (groupid=0, jobs=1): err= 0: pid=512577: Tue May 14 23:59:04 2024 00:12:35.073 read: IOPS=4437, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1001msec) 00:12:35.073 slat (nsec): min=4679, max=34952, avg=7240.44, stdev=3172.08 00:12:35.073 clat (usec): min=79, max=224, avg=102.61, stdev=16.59 00:12:35.073 lat (usec): min=85, max=231, avg=109.85, stdev=18.04 00:12:35.073 clat percentiles (usec): 00:12:35.073 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 91], 00:12:35.073 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 98], 60.00th=[ 101], 00:12:35.073 | 70.00th=[ 105], 80.00th=[ 112], 90.00th=[ 122], 95.00th=[ 141], 00:12:35.073 | 99.00th=[ 163], 99.50th=[ 180], 99.90th=[ 202], 99.95th=[ 212], 00:12:35.073 | 99.99th=[ 225] 00:12:35.073 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:12:35.073 slat (nsec): min=5346, max=37824, avg=8343.47, stdev=3558.66 00:12:35.073 clat (usec): min=74, max=227, avg=98.70, stdev=17.51 00:12:35.073 lat (usec): min=80, max=258, avg=107.05, stdev=19.20 00:12:35.073 clat percentiles (usec): 00:12:35.073 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 84], 20.00th=[ 87], 00:12:35.073 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 96], 00:12:35.073 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 124], 95.00th=[ 137], 00:12:35.073 | 99.00th=[ 163], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 194], 00:12:35.073 | 99.99th=[ 229] 00:12:35.073 bw ( KiB/s): min=17624, max=17624, per=25.83%, avg=17624.00, stdev= 0.00, samples=1 00:12:35.073 iops : min= 4406, max= 4406, avg=4406.00, stdev= 0.00, samples=1 00:12:35.073 lat (usec) : 100=63.51%, 250=36.49% 00:12:35.073 cpu : usr=4.10%, sys=6.80%, ctx=9051, majf=0, minf=2 00:12:35.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.073 issued rwts: total=4442,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:35.073 job1: (groupid=0, jobs=1): err= 0: pid=512578: Tue May 14 23:59:04 2024 00:12:35.073 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1001msec) 00:12:35.073 slat (nsec): min=4609, max=31115, avg=7748.39, stdev=3243.31 00:12:35.073 clat (usec): min=69, max=536, avg=116.13, stdev=32.34 00:12:35.073 lat (usec): min=75, max=542, avg=123.88, stdev=31.97 00:12:35.073 clat percentiles (usec): 00:12:35.073 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 84], 00:12:35.073 | 30.00th=[ 88], 40.00th=[ 92], 50.00th=[ 119], 60.00th=[ 135], 00:12:35.073 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 157], 95.00th=[ 165], 00:12:35.073 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 217], 99.95th=[ 223], 00:12:35.073 | 99.99th=[ 537] 00:12:35.073 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:12:35.073 slat (nsec): min=5564, max=34276, avg=8906.98, stdev=3590.89 00:12:35.073 clat (usec): min=63, max=212, avg=107.18, stdev=28.72 00:12:35.073 lat (usec): min=70, max=218, avg=116.08, stdev=28.20 00:12:35.073 clat percentiles (usec): 00:12:35.073 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 79], 00:12:35.073 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 100], 60.00th=[ 125], 00:12:35.073 | 70.00th=[ 128], 80.00th=[ 135], 90.00th=[ 147], 95.00th=[ 151], 00:12:35.073 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 194], 99.95th=[ 204], 00:12:35.073 | 
99.99th=[ 212] 00:12:35.073 bw ( KiB/s): min=20480, max=20480, per=30.02%, avg=20480.00, stdev= 0.00, samples=1 00:12:35.073 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:35.073 lat (usec) : 100=48.53%, 250=51.44%, 500=0.01%, 750=0.01% 00:12:35.073 cpu : usr=3.40%, sys=7.10%, ctx=8184, majf=0, minf=1 00:12:35.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.073 issued rwts: total=4088,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:35.073 job2: (groupid=0, jobs=1): err= 0: pid=512579: Tue May 14 23:59:04 2024 00:12:35.073 read: IOPS=3560, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1001msec) 00:12:35.073 slat (nsec): min=4770, max=38785, avg=7698.29, stdev=3385.30 00:12:35.073 clat (usec): min=90, max=469, avg=133.64, stdev=21.39 00:12:35.073 lat (usec): min=96, max=475, avg=141.33, stdev=21.89 00:12:35.073 clat percentiles (usec): 00:12:35.073 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 104], 20.00th=[ 111], 00:12:35.073 | 30.00th=[ 124], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:12:35.073 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 167], 00:12:35.073 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 208], 99.95th=[ 227], 00:12:35.073 | 99.99th=[ 469] 00:12:35.073 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:35.073 slat (nsec): min=5399, max=37509, avg=9344.94, stdev=3980.53 00:12:35.073 clat (usec): min=82, max=226, avg=124.85, stdev=18.45 00:12:35.073 lat (usec): min=90, max=256, avg=134.19, stdev=18.62 00:12:35.073 clat percentiles (usec): 00:12:35.073 | 1.00th=[ 90], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 105], 00:12:35.073 | 30.00th=[ 119], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:12:35.073 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:12:35.073 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 188], 99.95th=[ 190], 00:12:35.073 | 99.99th=[ 227] 00:12:35.073 bw ( KiB/s): min=16384, max=16384, per=24.01%, avg=16384.00, stdev= 0.00, samples=1 00:12:35.073 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:35.073 lat (usec) : 100=8.86%, 250=91.13%, 500=0.01% 00:12:35.073 cpu : usr=3.00%, sys=6.30%, ctx=7149, majf=0, minf=1 00:12:35.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.073 issued rwts: total=3564,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:35.073 job3: (groupid=0, jobs=1): err= 0: pid=512581: Tue May 14 23:59:04 2024 00:12:35.073 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:35.073 slat (nsec): min=4751, max=37601, avg=7106.63, stdev=3012.38 00:12:35.073 clat (usec): min=81, max=143, avg=99.06, stdev= 9.51 00:12:35.073 lat (usec): min=86, max=148, avg=106.17, stdev=10.33 00:12:35.073 clat percentiles (usec): 00:12:35.073 | 1.00th=[ 86], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 92], 00:12:35.073 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 99], 00:12:35.073 | 70.00th=[ 102], 80.00th=[ 108], 90.00th=[ 114], 95.00th=[ 119], 00:12:35.073 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 135], 
99.95th=[ 137], 00:12:35.073 | 99.99th=[ 145] 00:12:35.073 write: IOPS=4782, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1001msec); 0 zone resets 00:12:35.073 slat (nsec): min=5765, max=42421, avg=8756.81, stdev=3600.03 00:12:35.073 clat (usec): min=76, max=220, avg=93.84, stdev=10.22 00:12:35.073 lat (usec): min=82, max=229, avg=102.60, stdev=11.44 00:12:35.073 clat percentiles (usec): 00:12:35.073 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 84], 20.00th=[ 86], 00:12:35.073 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 94], 00:12:35.073 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 114], 00:12:35.073 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 147], 99.95th=[ 174], 00:12:35.073 | 99.99th=[ 221] 00:12:35.073 bw ( KiB/s): min=20480, max=20480, per=30.02%, avg=20480.00, stdev= 0.00, samples=1 00:12:35.073 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:35.073 lat (usec) : 100=71.04%, 250=28.96% 00:12:35.073 cpu : usr=3.70%, sys=7.80%, ctx=9396, majf=0, minf=1 00:12:35.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.074 issued rwts: total=4608,4787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:35.074 00:12:35.074 Run status group 0 (all jobs): 00:12:35.074 READ: bw=65.2MiB/s (68.3MB/s), 13.9MiB/s-18.0MiB/s (14.6MB/s-18.9MB/s), io=65.2MiB (68.4MB), run=1001-1001msec 00:12:35.074 WRITE: bw=66.6MiB/s (69.9MB/s), 14.0MiB/s-18.7MiB/s (14.7MB/s-19.6MB/s), io=66.7MiB (69.9MB), run=1001-1001msec 00:12:35.074 00:12:35.074 Disk stats (read/write): 00:12:35.074 nvme0n1: ios=3634/4050, merge=0/0, ticks=363/406, in_queue=769, util=86.27% 00:12:35.074 nvme0n2: ios=3589/3604, merge=0/0, ticks=391/376, in_queue=767, util=86.79% 00:12:35.074 nvme0n3: ios=3072/3078, merge=0/0, ticks=385/381, in_queue=766, util=89.05% 00:12:35.074 nvme0n4: ios=3883/4096, merge=0/0, ticks=384/391, in_queue=775, util=89.71% 00:12:35.074 23:59:04 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:35.074 [global] 00:12:35.074 thread=1 00:12:35.074 invalidate=1 00:12:35.074 rw=randwrite 00:12:35.074 time_based=1 00:12:35.074 runtime=1 00:12:35.074 ioengine=libaio 00:12:35.074 direct=1 00:12:35.074 bs=4096 00:12:35.074 iodepth=1 00:12:35.074 norandommap=0 00:12:35.074 numjobs=1 00:12:35.074 00:12:35.074 verify_dump=1 00:12:35.074 verify_backlog=512 00:12:35.074 verify_state_save=0 00:12:35.074 do_verify=1 00:12:35.074 verify=crc32c-intel 00:12:35.074 [job0] 00:12:35.074 filename=/dev/nvme0n1 00:12:35.074 [job1] 00:12:35.074 filename=/dev/nvme0n2 00:12:35.074 [job2] 00:12:35.074 filename=/dev/nvme0n3 00:12:35.074 [job3] 00:12:35.074 filename=/dev/nvme0n4 00:12:35.074 Could not set queue depth (nvme0n1) 00:12:35.074 Could not set queue depth (nvme0n2) 00:12:35.074 Could not set queue depth (nvme0n3) 00:12:35.074 Could not set queue depth (nvme0n4) 00:12:35.074 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:35.074 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:35.074 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:35.074 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:35.074 fio-3.35 00:12:35.074 Starting 4 threads 00:12:36.453 00:12:36.453 job0: (groupid=0, jobs=1): err= 0: pid=512811: Tue May 14 23:59:05 2024 00:12:36.453 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:12:36.453 slat (nsec): min=4669, max=28472, avg=7810.56, stdev=2695.56 00:12:36.453 clat (usec): min=69, max=123, avg=84.14, stdev= 6.83 00:12:36.453 lat (usec): min=75, max=131, avg=91.95, stdev= 8.03 00:12:36.453 clat percentiles (usec): 00:12:36.453 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 79], 00:12:36.453 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 85], 00:12:36.453 | 70.00th=[ 87], 80.00th=[ 89], 90.00th=[ 93], 95.00th=[ 97], 00:12:36.453 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 118], 99.95th=[ 120], 00:12:36.453 | 99.99th=[ 124] 00:12:36.453 write: IOPS=5557, BW=21.7MiB/s (22.8MB/s)(21.7MiB/1001msec); 0 zone resets 00:12:36.453 slat (nsec): min=5466, max=36946, avg=9852.78, stdev=3594.71 00:12:36.453 clat (usec): min=64, max=220, avg=80.74, stdev=10.39 00:12:36.453 lat (usec): min=71, max=252, avg=90.59, stdev=11.62 00:12:36.453 clat percentiles (usec): 00:12:36.453 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:12:36.453 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:12:36.453 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 91], 95.00th=[ 98], 00:12:36.453 | 99.00th=[ 128], 99.50th=[ 135], 99.90th=[ 153], 99.95th=[ 184], 00:12:36.453 | 99.99th=[ 221] 00:12:36.453 bw ( KiB/s): min=23128, max=23128, per=35.78%, avg=23128.00, stdev= 0.00, samples=1 00:12:36.453 iops : min= 5782, max= 5782, avg=5782.00, stdev= 0.00, samples=1 00:12:36.453 lat (usec) : 100=96.19%, 250=3.81% 00:12:36.453 cpu : usr=5.80%, sys=8.90%, ctx=10683, majf=0, minf=1 00:12:36.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.453 issued rwts: total=5120,5563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:36.453 job1: (groupid=0, jobs=1): err= 0: pid=512812: Tue May 14 23:59:05 2024 00:12:36.453 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:36.453 slat (nsec): min=5188, max=27757, avg=8902.06, stdev=2991.59 00:12:36.453 clat (usec): min=72, max=263, avg=146.25, stdev=35.07 00:12:36.453 lat (usec): min=78, max=271, avg=155.15, stdev=35.73 00:12:36.453 clat percentiles (usec): 00:12:36.453 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 90], 20.00th=[ 108], 00:12:36.453 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:12:36.453 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 194], 95.00th=[ 208], 00:12:36.453 | 99.00th=[ 227], 99.50th=[ 231], 99.90th=[ 239], 99.95th=[ 245], 00:12:36.453 | 99.99th=[ 265] 00:12:36.453 write: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec); 0 zone resets 00:12:36.453 slat (nsec): min=5968, max=32893, avg=10328.26, stdev=3745.77 00:12:36.453 clat (usec): min=65, max=241, avg=137.46, stdev=29.82 00:12:36.453 lat (usec): min=72, max=249, avg=147.78, stdev=30.54 00:12:36.453 clat percentiles (usec): 00:12:36.453 | 1.00th=[ 74], 5.00th=[ 80], 10.00th=[ 88], 20.00th=[ 114], 00:12:36.453 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:12:36.453 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 167], 
95.00th=[ 190], 00:12:36.453 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 235], 99.95th=[ 241], 00:12:36.453 | 99.99th=[ 241] 00:12:36.453 bw ( KiB/s): min=12368, max=12368, per=19.13%, avg=12368.00, stdev= 0.00, samples=1 00:12:36.453 iops : min= 3092, max= 3092, avg=3092.00, stdev= 0.00, samples=1 00:12:36.453 lat (usec) : 100=15.06%, 250=84.92%, 500=0.02% 00:12:36.453 cpu : usr=3.60%, sys=6.10%, ctx=6499, majf=0, minf=1 00:12:36.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.453 issued rwts: total=3072,3427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:36.453 job2: (groupid=0, jobs=1): err= 0: pid=512813: Tue May 14 23:59:05 2024 00:12:36.453 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:12:36.453 slat (nsec): min=4980, max=37457, avg=8681.93, stdev=2987.17 00:12:36.453 clat (usec): min=85, max=254, avg=114.93, stdev=21.38 00:12:36.453 lat (usec): min=92, max=261, avg=123.61, stdev=21.49 00:12:36.453 clat percentiles (usec): 00:12:36.453 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 100], 20.00th=[ 103], 00:12:36.453 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:12:36.453 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 145], 95.00th=[ 169], 00:12:36.453 | 99.00th=[ 198], 99.50th=[ 210], 99.90th=[ 227], 99.95th=[ 235], 00:12:36.453 | 99.99th=[ 255] 00:12:36.453 write: IOPS=4110, BW=16.1MiB/s (16.8MB/s)(16.1MiB/1001msec); 0 zone resets 00:12:36.453 slat (nsec): min=5784, max=38802, avg=10314.65, stdev=3895.33 00:12:36.453 clat (usec): min=80, max=467, avg=104.67, stdev=15.12 00:12:36.453 lat (usec): min=87, max=476, avg=114.98, stdev=15.93 00:12:36.453 clat percentiles (usec): 00:12:36.453 | 1.00th=[ 86], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:12:36.453 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 104], 00:12:36.453 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 119], 95.00th=[ 129], 00:12:36.453 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 198], 99.95th=[ 225], 00:12:36.453 | 99.99th=[ 469] 00:12:36.453 bw ( KiB/s): min=18288, max=18288, per=28.29%, avg=18288.00, stdev= 0.00, samples=1 00:12:36.453 iops : min= 4572, max= 4572, avg=4572.00, stdev= 0.00, samples=1 00:12:36.453 lat (usec) : 100=25.98%, 250=73.99%, 500=0.04% 00:12:36.453 cpu : usr=4.10%, sys=7.90%, ctx=8211, majf=0, minf=2 00:12:36.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.453 issued rwts: total=4096,4115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:36.453 job3: (groupid=0, jobs=1): err= 0: pid=512814: Tue May 14 23:59:05 2024 00:12:36.453 read: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:12:36.453 slat (nsec): min=5331, max=39310, avg=9064.63, stdev=3432.92 00:12:36.453 clat (usec): min=95, max=250, avg=156.28, stdev=16.14 00:12:36.453 lat (usec): min=104, max=258, avg=165.34, stdev=16.38 00:12:36.453 clat percentiles (usec): 00:12:36.453 | 1.00th=[ 111], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 147], 00:12:36.453 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:12:36.453 | 70.00th=[ 161], 
80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 184], 00:12:36.453 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 231], 99.95th=[ 245], 00:12:36.453 | 99.99th=[ 251] 00:12:36.453 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:36.453 slat (nsec): min=6072, max=33468, avg=10940.57, stdev=3929.03 00:12:36.453 clat (usec): min=91, max=297, avg=149.13, stdev=17.34 00:12:36.453 lat (usec): min=102, max=304, avg=160.07, stdev=17.08 00:12:36.453 clat percentiles (usec): 00:12:36.453 | 1.00th=[ 109], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 137], 00:12:36.453 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:12:36.453 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 184], 00:12:36.454 | 99.00th=[ 204], 99.50th=[ 217], 99.90th=[ 255], 99.95th=[ 269], 00:12:36.454 | 99.99th=[ 297] 00:12:36.454 bw ( KiB/s): min=12360, max=12360, per=19.12%, avg=12360.00, stdev= 0.00, samples=1 00:12:36.454 iops : min= 3090, max= 3090, avg=3090.00, stdev= 0.00, samples=1 00:12:36.454 lat (usec) : 100=0.33%, 250=99.57%, 500=0.10% 00:12:36.454 cpu : usr=2.80%, sys=6.60%, ctx=6051, majf=0, minf=1 00:12:36.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.454 issued rwts: total=2979,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:36.454 00:12:36.454 Run status group 0 (all jobs): 00:12:36.454 READ: bw=59.6MiB/s (62.5MB/s), 11.6MiB/s-20.0MiB/s (12.2MB/s-20.9MB/s), io=59.6MiB (62.5MB), run=1001-1001msec 00:12:36.454 WRITE: bw=63.1MiB/s (66.2MB/s), 12.0MiB/s-21.7MiB/s (12.6MB/s-22.8MB/s), io=63.2MiB (66.3MB), run=1001-1001msec 00:12:36.454 00:12:36.454 Disk stats (read/write): 00:12:36.454 nvme0n1: ios=4553/4608, merge=0/0, ticks=380/355, in_queue=735, util=86.37% 00:12:36.454 nvme0n2: ios=2560/3051, merge=0/0, ticks=371/405, in_queue=776, util=86.80% 00:12:36.454 nvme0n3: ios=3392/3584, merge=0/0, ticks=377/364, in_queue=741, util=89.06% 00:12:36.454 nvme0n4: ios=2560/2604, merge=0/0, ticks=380/389, in_queue=769, util=89.62% 00:12:36.454 23:59:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:36.454 [global] 00:12:36.454 thread=1 00:12:36.454 invalidate=1 00:12:36.454 rw=write 00:12:36.454 time_based=1 00:12:36.454 runtime=1 00:12:36.454 ioengine=libaio 00:12:36.454 direct=1 00:12:36.454 bs=4096 00:12:36.454 iodepth=128 00:12:36.454 norandommap=0 00:12:36.454 numjobs=1 00:12:36.454 00:12:36.454 verify_dump=1 00:12:36.454 verify_backlog=512 00:12:36.454 verify_state_save=0 00:12:36.454 do_verify=1 00:12:36.454 verify=crc32c-intel 00:12:36.454 [job0] 00:12:36.454 filename=/dev/nvme0n1 00:12:36.454 [job1] 00:12:36.454 filename=/dev/nvme0n2 00:12:36.454 [job2] 00:12:36.454 filename=/dev/nvme0n3 00:12:36.454 [job3] 00:12:36.454 filename=/dev/nvme0n4 00:12:36.454 Could not set queue depth (nvme0n1) 00:12:36.454 Could not set queue depth (nvme0n2) 00:12:36.454 Could not set queue depth (nvme0n3) 00:12:36.454 Could not set queue depth (nvme0n4) 00:12:36.712 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:36.712 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:36.712 
job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:36.712 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:36.712 fio-3.35 00:12:36.712 Starting 4 threads 00:12:38.087 00:12:38.087 job0: (groupid=0, jobs=1): err= 0: pid=513158: Tue May 14 23:59:07 2024 00:12:38.087 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:12:38.087 slat (usec): min=3, max=6867, avg=64.57, stdev=316.47 00:12:38.087 clat (usec): min=736, max=25185, avg=9438.64, stdev=4788.16 00:12:38.087 lat (usec): min=1000, max=25190, avg=9503.21, stdev=4816.05 00:12:38.087 clat percentiles (usec): 00:12:38.087 | 1.00th=[ 3064], 5.00th=[ 4948], 10.00th=[ 6063], 20.00th=[ 7111], 00:12:38.087 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:12:38.087 | 70.00th=[ 8291], 80.00th=[ 9110], 90.00th=[19006], 95.00th=[21890], 00:12:38.087 | 99.00th=[23200], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:12:38.087 | 99.99th=[25297] 00:12:38.087 write: IOPS=7183, BW=28.1MiB/s (29.4MB/s)(28.2MiB/1004msec); 0 zone resets 00:12:38.087 slat (usec): min=3, max=5254, avg=55.18, stdev=245.42 00:12:38.087 clat (usec): min=606, max=21782, avg=8302.48, stdev=3964.06 00:12:38.087 lat (usec): min=952, max=21788, avg=8357.65, stdev=3992.18 00:12:38.087 clat percentiles (usec): 00:12:38.087 | 1.00th=[ 3228], 5.00th=[ 4686], 10.00th=[ 5669], 20.00th=[ 6587], 00:12:38.087 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:12:38.087 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[12256], 95.00th=[20841], 00:12:38.087 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:12:38.087 | 99.99th=[21890] 00:12:38.087 bw ( KiB/s): min=24560, max=32784, per=31.54%, avg=28672.00, stdev=5815.25, samples=2 00:12:38.087 iops : min= 6140, max= 8196, avg=7168.00, stdev=1453.81, samples=2 00:12:38.087 lat (usec) : 750=0.01%, 1000=0.02% 00:12:38.087 lat (msec) : 2=0.40%, 4=2.59%, 10=81.94%, 20=6.88%, 50=8.16% 00:12:38.087 cpu : usr=8.28%, sys=10.77%, ctx=920, majf=0, minf=1 00:12:38.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:38.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:38.087 issued rwts: total=7168,7212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:38.087 job1: (groupid=0, jobs=1): err= 0: pid=513159: Tue May 14 23:59:07 2024 00:12:38.087 read: IOPS=3778, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1005msec) 00:12:38.087 slat (usec): min=3, max=7110, avg=129.49, stdev=539.54 00:12:38.087 clat (usec): min=3379, max=28385, avg=16549.93, stdev=5703.15 00:12:38.087 lat (usec): min=5785, max=28421, avg=16679.42, stdev=5745.30 00:12:38.087 clat percentiles (usec): 00:12:38.087 | 1.00th=[ 7046], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8455], 00:12:38.087 | 30.00th=[12649], 40.00th=[16319], 50.00th=[17695], 60.00th=[20317], 00:12:38.087 | 70.00th=[21103], 80.00th=[21627], 90.00th=[22414], 95.00th=[23725], 00:12:38.087 | 99.00th=[25035], 99.50th=[25822], 99.90th=[28181], 99.95th=[28181], 00:12:38.087 | 99.99th=[28443] 00:12:38.087 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:12:38.087 slat (usec): min=3, max=5694, avg=115.85, stdev=505.82 00:12:38.087 clat (usec): min=4994, max=29817, avg=15720.97, stdev=5883.45 00:12:38.087 lat 
(usec): min=4999, max=29851, avg=15836.82, stdev=5929.00 00:12:38.087 clat percentiles (usec): 00:12:38.087 | 1.00th=[ 6849], 5.00th=[ 7701], 10.00th=[ 7832], 20.00th=[ 8160], 00:12:38.087 | 30.00th=[11469], 40.00th=[14484], 50.00th=[16188], 60.00th=[18482], 00:12:38.087 | 70.00th=[20841], 80.00th=[21627], 90.00th=[22676], 95.00th=[24249], 00:12:38.087 | 99.00th=[25560], 99.50th=[26346], 99.90th=[28705], 99.95th=[29230], 00:12:38.087 | 99.99th=[29754] 00:12:38.087 bw ( KiB/s): min=14640, max=18128, per=18.03%, avg=16384.00, stdev=2466.39, samples=2 00:12:38.087 iops : min= 3660, max= 4532, avg=4096.00, stdev=616.60, samples=2 00:12:38.087 lat (msec) : 4=0.01%, 10=26.02%, 20=36.18%, 50=37.78% 00:12:38.087 cpu : usr=4.68%, sys=6.27%, ctx=724, majf=0, minf=1 00:12:38.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:38.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:38.087 issued rwts: total=3797,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:38.087 job2: (groupid=0, jobs=1): err= 0: pid=513160: Tue May 14 23:59:07 2024 00:12:38.087 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:12:38.087 slat (usec): min=3, max=3955, avg=82.29, stdev=323.58 00:12:38.087 clat (usec): min=6849, max=22446, avg=10961.20, stdev=3745.85 00:12:38.087 lat (usec): min=6860, max=22465, avg=11043.49, stdev=3767.76 00:12:38.087 clat percentiles (usec): 00:12:38.087 | 1.00th=[ 7635], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:12:38.087 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:12:38.087 | 70.00th=[10421], 80.00th=[11731], 90.00th=[18220], 95.00th=[21365], 00:12:38.087 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22414], 99.95th=[22414], 00:12:38.087 | 99.99th=[22414] 00:12:38.087 write: IOPS=5867, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1005msec); 0 zone resets 00:12:38.087 slat (usec): min=3, max=5690, avg=82.95, stdev=343.50 00:12:38.087 clat (usec): min=1406, max=23733, avg=11083.02, stdev=4092.66 00:12:38.087 lat (usec): min=5308, max=23739, avg=11165.97, stdev=4114.62 00:12:38.087 clat percentiles (usec): 00:12:38.087 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8160], 20.00th=[ 8586], 00:12:38.087 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9896], 00:12:38.087 | 70.00th=[10945], 80.00th=[12256], 90.00th=[20579], 95.00th=[21890], 00:12:38.087 | 99.00th=[22676], 99.50th=[22938], 99.90th=[22938], 99.95th=[23725], 00:12:38.087 | 99.99th=[23725] 00:12:38.087 bw ( KiB/s): min=17600, max=28552, per=25.39%, avg=23076.00, stdev=7744.23, samples=2 00:12:38.087 iops : min= 4400, max= 7138, avg=5769.00, stdev=1936.06, samples=2 00:12:38.087 lat (msec) : 2=0.01%, 10=63.01%, 20=27.37%, 50=9.61% 00:12:38.087 cpu : usr=5.88%, sys=9.66%, ctx=896, majf=0, minf=1 00:12:38.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:38.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:38.087 issued rwts: total=5632,5897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:38.087 job3: (groupid=0, jobs=1): err= 0: pid=513161: Tue May 14 23:59:07 2024 00:12:38.087 read: IOPS=5431, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1002msec) 00:12:38.087 slat (usec): min=3, 
max=5065, avg=88.46, stdev=344.85 00:12:38.087 clat (usec): min=873, max=21126, avg=11709.95, stdev=3810.40 00:12:38.087 lat (usec): min=1597, max=21143, avg=11798.41, stdev=3842.67 00:12:38.087 clat percentiles (usec): 00:12:38.087 | 1.00th=[ 4178], 5.00th=[ 5866], 10.00th=[ 6915], 20.00th=[ 8094], 00:12:38.087 | 30.00th=[ 8717], 40.00th=[10159], 50.00th=[10945], 60.00th=[13960], 00:12:38.087 | 70.00th=[14615], 80.00th=[15664], 90.00th=[16712], 95.00th=[17171], 00:12:38.087 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19792], 99.95th=[20841], 00:12:38.087 | 99.99th=[21103] 00:12:38.087 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:12:38.087 slat (usec): min=3, max=5313, avg=83.76, stdev=320.38 00:12:38.087 clat (usec): min=3470, max=20714, avg=11191.45, stdev=3662.83 00:12:38.087 lat (usec): min=3627, max=20747, avg=11275.21, stdev=3694.16 00:12:38.087 clat percentiles (usec): 00:12:38.087 | 1.00th=[ 4621], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 7504], 00:12:38.087 | 30.00th=[ 8094], 40.00th=[ 9241], 50.00th=[10290], 60.00th=[13042], 00:12:38.087 | 70.00th=[13829], 80.00th=[15270], 90.00th=[16188], 95.00th=[16581], 00:12:38.087 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19530], 99.95th=[20317], 00:12:38.087 | 99.99th=[20841] 00:12:38.087 bw ( KiB/s): min=16384, max=28672, per=24.79%, avg=22528.00, stdev=8688.93, samples=2 00:12:38.087 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:12:38.087 lat (usec) : 1000=0.01% 00:12:38.087 lat (msec) : 2=0.18%, 4=0.33%, 10=41.55%, 20=57.87%, 50=0.07% 00:12:38.087 cpu : usr=5.49%, sys=9.39%, ctx=785, majf=0, minf=1 00:12:38.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:38.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:38.087 issued rwts: total=5442,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:38.087 00:12:38.087 Run status group 0 (all jobs): 00:12:38.087 READ: bw=85.7MiB/s (89.8MB/s), 14.8MiB/s-27.9MiB/s (15.5MB/s-29.2MB/s), io=86.1MiB (90.3MB), run=1002-1005msec 00:12:38.087 WRITE: bw=88.8MiB/s (93.1MB/s), 15.9MiB/s-28.1MiB/s (16.7MB/s-29.4MB/s), io=89.2MiB (93.5MB), run=1002-1005msec 00:12:38.087 00:12:38.087 Disk stats (read/write): 00:12:38.087 nvme0n1: ios=6706/6807, merge=0/0, ticks=46588/44987, in_queue=91575, util=86.27% 00:12:38.087 nvme0n2: ios=3412/3584, merge=0/0, ticks=18891/19007, in_queue=37898, util=86.59% 00:12:38.087 nvme0n3: ios=5120/5507, merge=0/0, ticks=12958/14609, in_queue=27567, util=89.05% 00:12:38.087 nvme0n4: ios=4096/4376, merge=0/0, ticks=22658/22512, in_queue=45170, util=89.71% 00:12:38.087 23:59:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:38.087 [global] 00:12:38.087 thread=1 00:12:38.087 invalidate=1 00:12:38.087 rw=randwrite 00:12:38.087 time_based=1 00:12:38.087 runtime=1 00:12:38.087 ioengine=libaio 00:12:38.087 direct=1 00:12:38.087 bs=4096 00:12:38.087 iodepth=128 00:12:38.087 norandommap=0 00:12:38.087 numjobs=1 00:12:38.087 00:12:38.087 verify_dump=1 00:12:38.087 verify_backlog=512 00:12:38.087 verify_state_save=0 00:12:38.087 do_verify=1 00:12:38.087 verify=crc32c-intel 00:12:38.087 [job0] 00:12:38.087 filename=/dev/nvme0n1 00:12:38.087 [job1] 00:12:38.087 filename=/dev/nvme0n2 00:12:38.087 [job2] 
00:12:38.087 filename=/dev/nvme0n3 00:12:38.087 [job3] 00:12:38.087 filename=/dev/nvme0n4 00:12:38.087 Could not set queue depth (nvme0n1) 00:12:38.087 Could not set queue depth (nvme0n2) 00:12:38.087 Could not set queue depth (nvme0n3) 00:12:38.087 Could not set queue depth (nvme0n4) 00:12:38.087 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:38.088 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:38.088 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:38.088 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:38.088 fio-3.35 00:12:38.088 Starting 4 threads 00:12:39.462 00:12:39.462 job0: (groupid=0, jobs=1): err= 0: pid=513385: Tue May 14 23:59:08 2024 00:12:39.462 read: IOPS=5031, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1005msec) 00:12:39.462 slat (usec): min=3, max=5901, avg=94.56, stdev=407.52 00:12:39.462 clat (usec): min=3905, max=24338, avg=12456.26, stdev=5607.27 00:12:39.462 lat (usec): min=3931, max=26193, avg=12550.81, stdev=5638.57 00:12:39.462 clat percentiles (usec): 00:12:39.462 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 7635], 00:12:39.462 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[10683], 60.00th=[12649], 00:12:39.462 | 70.00th=[14746], 80.00th=[17957], 90.00th=[22676], 95.00th=[22938], 00:12:39.462 | 99.00th=[23725], 99.50th=[23987], 99.90th=[24249], 99.95th=[24249], 00:12:39.462 | 99.99th=[24249] 00:12:39.462 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:12:39.462 slat (usec): min=3, max=4360, avg=94.11, stdev=364.77 00:12:39.462 clat (usec): min=2952, max=23772, avg=12455.46, stdev=5252.42 00:12:39.462 lat (usec): min=2971, max=23812, avg=12549.57, stdev=5286.38 00:12:39.462 clat percentiles (usec): 00:12:39.462 | 1.00th=[ 4948], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7373], 00:12:39.462 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[11207], 60.00th=[14877], 00:12:39.462 | 70.00th=[16319], 80.00th=[17171], 90.00th=[20579], 95.00th=[22414], 00:12:39.462 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23725], 99.95th=[23725], 00:12:39.462 | 99.99th=[23725] 00:12:39.462 bw ( KiB/s): min=20480, max=20521, per=23.95%, avg=20500.50, stdev=28.99, samples=2 00:12:39.462 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:12:39.462 lat (msec) : 4=0.17%, 10=46.57%, 20=39.71%, 50=13.56% 00:12:39.462 cpu : usr=4.98%, sys=7.57%, ctx=975, majf=0, minf=1 00:12:39.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:39.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:39.462 issued rwts: total=5057,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:39.462 job1: (groupid=0, jobs=1): err= 0: pid=513386: Tue May 14 23:59:08 2024 00:12:39.462 read: IOPS=5549, BW=21.7MiB/s (22.7MB/s)(21.8MiB/1005msec) 00:12:39.462 slat (usec): min=2, max=3573, avg=87.37, stdev=316.70 00:12:39.462 clat (usec): min=3826, max=26109, avg=11500.18, stdev=5343.91 00:12:39.462 lat (usec): min=4159, max=26115, avg=11587.55, stdev=5377.62 00:12:39.462 clat percentiles (usec): 00:12:39.462 | 1.00th=[ 5276], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 7570], 00:12:39.462 | 30.00th=[ 
8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10421], 00:12:39.462 | 70.00th=[11994], 80.00th=[15533], 90.00th=[22152], 95.00th=[22676], 00:12:39.462 | 99.00th=[23462], 99.50th=[23725], 99.90th=[26084], 99.95th=[26084], 00:12:39.462 | 99.99th=[26084] 00:12:39.462 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:12:39.462 slat (usec): min=3, max=4939, avg=83.45, stdev=318.66 00:12:39.462 clat (usec): min=3174, max=24445, avg=11195.29, stdev=5234.81 00:12:39.462 lat (usec): min=3182, max=25294, avg=11278.75, stdev=5270.09 00:12:39.462 clat percentiles (usec): 00:12:39.462 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6652], 00:12:39.462 | 30.00th=[ 7242], 40.00th=[ 7898], 50.00th=[ 9503], 60.00th=[11863], 00:12:39.462 | 70.00th=[13042], 80.00th=[16057], 90.00th=[19530], 95.00th=[22676], 00:12:39.462 | 99.00th=[23200], 99.50th=[23462], 99.90th=[24249], 99.95th=[24511], 00:12:39.462 | 99.99th=[24511] 00:12:39.462 bw ( KiB/s): min=16384, max=28672, per=26.32%, avg=22528.00, stdev=8688.93, samples=2 00:12:39.462 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:12:39.462 lat (msec) : 4=0.35%, 10=54.33%, 20=33.13%, 50=12.20% 00:12:39.462 cpu : usr=5.58%, sys=7.67%, ctx=935, majf=0, minf=1 00:12:39.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:39.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:39.462 issued rwts: total=5577,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:39.462 job2: (groupid=0, jobs=1): err= 0: pid=513387: Tue May 14 23:59:08 2024 00:12:39.462 read: IOPS=4627, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1005msec) 00:12:39.462 slat (usec): min=3, max=5573, avg=99.08, stdev=405.05 00:12:39.462 clat (usec): min=1421, max=24165, avg=12763.28, stdev=4263.13 00:12:39.462 lat (usec): min=4645, max=24179, avg=12862.36, stdev=4281.04 00:12:39.462 clat percentiles (usec): 00:12:39.462 | 1.00th=[ 7308], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9372], 00:12:39.462 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11207], 60.00th=[12518], 00:12:39.462 | 70.00th=[14091], 80.00th=[16319], 90.00th=[19530], 95.00th=[22676], 00:12:39.462 | 99.00th=[23200], 99.50th=[23725], 99.90th=[24249], 99.95th=[24249], 00:12:39.462 | 99.99th=[24249] 00:12:39.462 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:12:39.462 slat (usec): min=3, max=5246, avg=98.41, stdev=397.32 00:12:39.462 clat (usec): min=5647, max=24416, avg=13219.59, stdev=5349.13 00:12:39.462 lat (usec): min=6051, max=24429, avg=13317.99, stdev=5376.92 00:12:39.462 clat percentiles (usec): 00:12:39.462 | 1.00th=[ 6783], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8717], 00:12:39.462 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[13435], 00:12:39.462 | 70.00th=[16909], 80.00th=[19006], 90.00th=[22414], 95.00th=[22676], 00:12:39.462 | 99.00th=[23462], 99.50th=[23725], 99.90th=[24249], 99.95th=[24511], 00:12:39.462 | 99.99th=[24511] 00:12:39.462 bw ( KiB/s): min=19800, max=20480, per=23.53%, avg=20140.00, stdev=480.83, samples=2 00:12:39.462 iops : min= 4950, max= 5120, avg=5035.00, stdev=120.21, samples=2 00:12:39.462 lat (msec) : 2=0.01%, 10=39.23%, 20=47.40%, 50=13.37% 00:12:39.462 cpu : usr=4.58%, sys=7.57%, ctx=866, majf=0, minf=1 00:12:39.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 
00:12:39.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:39.462 issued rwts: total=4651,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:39.462 job3: (groupid=0, jobs=1): err= 0: pid=513388: Tue May 14 23:59:08 2024 00:12:39.462 read: IOPS=5357, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1002msec) 00:12:39.462 slat (usec): min=3, max=5883, avg=92.14, stdev=409.75 00:12:39.462 clat (usec): min=1275, max=23451, avg=12382.84, stdev=4481.03 00:12:39.462 lat (usec): min=1280, max=23461, avg=12474.98, stdev=4503.31 00:12:39.462 clat percentiles (usec): 00:12:39.462 | 1.00th=[ 4228], 5.00th=[ 7242], 10.00th=[ 7767], 20.00th=[ 8291], 00:12:39.462 | 30.00th=[ 8979], 40.00th=[10290], 50.00th=[11338], 60.00th=[12518], 00:12:39.462 | 70.00th=[14877], 80.00th=[16712], 90.00th=[18482], 95.00th=[21890], 00:12:39.462 | 99.00th=[22676], 99.50th=[22938], 99.90th=[23462], 99.95th=[23462], 00:12:39.462 | 99.99th=[23462] 00:12:39.462 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:12:39.462 slat (usec): min=3, max=5065, avg=81.43, stdev=331.78 00:12:39.462 clat (usec): min=4026, max=21644, avg=10651.23, stdev=3186.49 00:12:39.462 lat (usec): min=4031, max=21662, avg=10732.66, stdev=3206.82 00:12:39.462 clat percentiles (usec): 00:12:39.462 | 1.00th=[ 6849], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 7898], 00:12:39.462 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[10683], 00:12:39.462 | 70.00th=[11994], 80.00th=[13829], 90.00th=[15926], 95.00th=[16712], 00:12:39.462 | 99.00th=[17957], 99.50th=[20317], 99.90th=[21365], 99.95th=[21365], 00:12:39.462 | 99.99th=[21627] 00:12:39.462 bw ( KiB/s): min=20480, max=24576, per=26.32%, avg=22528.00, stdev=2896.31, samples=2 00:12:39.462 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:12:39.462 lat (msec) : 2=0.15%, 4=0.23%, 10=46.96%, 20=48.82%, 50=3.85% 00:12:39.462 cpu : usr=6.49%, sys=8.19%, ctx=892, majf=0, minf=1 00:12:39.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:39.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:39.462 issued rwts: total=5368,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:39.462 00:12:39.462 Run status group 0 (all jobs): 00:12:39.462 READ: bw=80.3MiB/s (84.2MB/s), 18.1MiB/s-21.7MiB/s (19.0MB/s-22.7MB/s), io=80.7MiB (84.6MB), run=1002-1005msec 00:12:39.462 WRITE: bw=83.6MiB/s (87.6MB/s), 19.9MiB/s-22.0MiB/s (20.9MB/s-23.0MB/s), io=84.0MiB (88.1MB), run=1002-1005msec 00:12:39.462 00:12:39.462 Disk stats (read/write): 00:12:39.462 nvme0n1: ios=4184/4608, merge=0/0, ticks=12150/15172, in_queue=27322, util=86.27% 00:12:39.462 nvme0n2: ios=5149/5184, merge=0/0, ticks=14310/14853, in_queue=29163, util=87.02% 00:12:39.463 nvme0n3: ios=4234/4608, merge=0/0, ticks=12837/14114, in_queue=26951, util=88.76% 00:12:39.463 nvme0n4: ios=4141/4608, merge=0/0, ticks=13945/13228, in_queue=27173, util=89.62% 00:12:39.463 23:59:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:39.463 23:59:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=513528 00:12:39.463 23:59:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:39.463 23:59:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:39.463 [global] 00:12:39.463 thread=1 00:12:39.463 invalidate=1 00:12:39.463 rw=read 00:12:39.463 time_based=1 00:12:39.463 runtime=10 00:12:39.463 ioengine=libaio 00:12:39.463 direct=1 00:12:39.463 bs=4096 00:12:39.463 iodepth=1 00:12:39.463 norandommap=1 00:12:39.463 numjobs=1 00:12:39.463 00:12:39.463 [job0] 00:12:39.463 filename=/dev/nvme0n1 00:12:39.463 [job1] 00:12:39.463 filename=/dev/nvme0n2 00:12:39.463 [job2] 00:12:39.463 filename=/dev/nvme0n3 00:12:39.463 [job3] 00:12:39.463 filename=/dev/nvme0n4 00:12:39.463 Could not set queue depth (nvme0n1) 00:12:39.463 Could not set queue depth (nvme0n2) 00:12:39.463 Could not set queue depth (nvme0n3) 00:12:39.463 Could not set queue depth (nvme0n4) 00:12:39.463 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:39.463 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:39.463 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:39.463 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:39.463 fio-3.35 00:12:39.463 Starting 4 threads 00:12:42.740 23:59:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:42.740 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=77832192, buflen=4096 00:12:42.740 fio: pid=513628, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:42.740 23:59:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:42.740 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=90411008, buflen=4096 00:12:42.740 fio: pid=513627, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:42.740 23:59:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:42.740 23:59:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:42.999 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=44822528, buflen=4096 00:12:42.999 fio: pid=513623, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:42.999 23:59:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:42.999 23:59:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:43.256 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=52973568, buflen=4096 00:12:43.256 fio: pid=513624, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:43.256 23:59:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:43.256 23:59:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:43.256 00:12:43.256 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=513623: Tue May 14 23:59:12 2024 00:12:43.256 read: IOPS=8002, BW=31.3MiB/s (32.8MB/s)(107MiB/3415msec) 00:12:43.257 slat (usec): min=4, max=15872, avg=10.18, stdev=155.02 00:12:43.257 clat (usec): min=58, max=347, avg=113.26, stdev=38.66 00:12:43.257 lat (usec): min=65, max=16024, avg=123.43, stdev=160.37 00:12:43.257 clat percentiles (usec): 00:12:43.257 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 79], 20.00th=[ 84], 00:12:43.257 | 30.00th=[ 88], 40.00th=[ 92], 50.00th=[ 98], 60.00th=[ 105], 00:12:43.257 | 70.00th=[ 128], 80.00th=[ 145], 90.00th=[ 176], 95.00th=[ 194], 00:12:43.257 | 99.00th=[ 223], 99.50th=[ 258], 99.90th=[ 310], 99.95th=[ 322], 00:12:43.257 | 99.99th=[ 343] 00:12:43.257 bw ( KiB/s): min=22936, max=38672, per=29.41%, avg=31002.67, stdev=7097.50, samples=6 00:12:43.257 iops : min= 5734, max= 9668, avg=7750.67, stdev=1774.38, samples=6 00:12:43.257 lat (usec) : 100=54.41%, 250=45.05%, 500=0.55% 00:12:43.257 cpu : usr=2.34%, sys=7.47%, ctx=27332, majf=0, minf=1 00:12:43.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.257 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.257 issued rwts: total=27328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.257 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=513624: Tue May 14 23:59:12 2024 00:12:43.257 read: IOPS=7906, BW=30.9MiB/s (32.4MB/s)(115MiB/3708msec) 00:12:43.257 slat (usec): min=4, max=14912, avg=10.53, stdev=154.06 00:12:43.257 clat (usec): min=56, max=569, avg=114.40, stdev=42.07 00:12:43.257 lat (usec): min=62, max=15123, avg=124.93, stdev=160.73 00:12:43.257 clat percentiles (usec): 00:12:43.257 | 1.00th=[ 62], 5.00th=[ 67], 10.00th=[ 74], 20.00th=[ 81], 00:12:43.257 | 30.00th=[ 86], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 109], 00:12:43.257 | 70.00th=[ 141], 80.00th=[ 155], 90.00th=[ 176], 95.00th=[ 188], 00:12:43.257 | 99.00th=[ 225], 99.50th=[ 255], 99.90th=[ 310], 99.95th=[ 334], 00:12:43.257 | 99.99th=[ 449] 00:12:43.257 bw ( KiB/s): min=23376, max=38248, per=29.36%, avg=30945.86, stdev=5678.09, samples=7 00:12:43.257 iops : min= 5844, max= 9562, avg=7736.43, stdev=1419.48, samples=7 00:12:43.257 lat (usec) : 100=55.32%, 250=44.17%, 500=0.50%, 750=0.01% 00:12:43.257 cpu : usr=2.62%, sys=7.31%, ctx=29324, majf=0, minf=1 00:12:43.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.257 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.257 issued rwts: total=29318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.257 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=513627: Tue May 14 23:59:12 2024 00:12:43.257 read: IOPS=7016, BW=27.4MiB/s (28.7MB/s)(86.2MiB/3146msec) 00:12:43.257 slat (usec): min=4, max=11858, avg= 9.93, stdev=95.53 00:12:43.257 clat (usec): min=75, max=562, avg=130.93, stdev=36.23 00:12:43.257 lat (usec): min=82, max=12035, avg=140.86, stdev=102.52 00:12:43.257 clat percentiles (usec): 00:12:43.257 | 1.00th=[ 86], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 100], 00:12:43.257 | 30.00th=[ 105], 40.00th=[ 111], 50.00th=[ 118], 60.00th=[ 131], 
00:12:43.257 | 70.00th=[ 151], 80.00th=[ 165], 90.00th=[ 186], 95.00th=[ 198], 00:12:43.257 | 99.00th=[ 221], 99.50th=[ 239], 99.90th=[ 285], 99.95th=[ 314], 00:12:43.257 | 99.99th=[ 359] 00:12:43.257 bw ( KiB/s): min=24824, max=34104, per=26.86%, avg=28309.33, stdev=3259.93, samples=6 00:12:43.257 iops : min= 6206, max= 8526, avg=7077.33, stdev=814.98, samples=6 00:12:43.257 lat (usec) : 100=20.81%, 250=78.83%, 500=0.35%, 750=0.01% 00:12:43.257 cpu : usr=2.70%, sys=6.74%, ctx=22077, majf=0, minf=1 00:12:43.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.257 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.257 issued rwts: total=22074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.257 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=513628: Tue May 14 23:59:12 2024 00:12:43.257 read: IOPS=6586, BW=25.7MiB/s (27.0MB/s)(74.2MiB/2885msec) 00:12:43.257 slat (nsec): min=4684, max=46106, avg=9278.44, stdev=3930.52 00:12:43.257 clat (usec): min=82, max=335, avg=140.84, stdev=36.39 00:12:43.257 lat (usec): min=87, max=359, avg=150.11, stdev=37.26 00:12:43.257 clat percentiles (usec): 00:12:43.257 | 1.00th=[ 89], 5.00th=[ 94], 10.00th=[ 98], 20.00th=[ 104], 00:12:43.257 | 30.00th=[ 111], 40.00th=[ 130], 50.00th=[ 143], 60.00th=[ 149], 00:12:43.257 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 190], 95.00th=[ 200], 00:12:43.257 | 99.00th=[ 239], 99.50th=[ 273], 99.90th=[ 310], 99.95th=[ 326], 00:12:43.257 | 99.99th=[ 334] 00:12:43.257 bw ( KiB/s): min=21984, max=28240, per=24.02%, avg=25316.80, stdev=2735.28, samples=5 00:12:43.257 iops : min= 5496, max= 7060, avg=6329.20, stdev=683.82, samples=5 00:12:43.257 lat (usec) : 100=13.77%, 250=85.49%, 500=0.74% 00:12:43.257 cpu : usr=2.22%, sys=6.93%, ctx=19003, majf=0, minf=1 00:12:43.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.257 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.257 issued rwts: total=19003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.257 00:12:43.257 Run status group 0 (all jobs): 00:12:43.257 READ: bw=103MiB/s (108MB/s), 25.7MiB/s-31.3MiB/s (27.0MB/s-32.8MB/s), io=382MiB (400MB), run=2885-3708msec 00:12:43.257 00:12:43.257 Disk stats (read/write): 00:12:43.257 nvme0n1: ios=26742/0, merge=0/0, ticks=2967/0, in_queue=2967, util=94.68% 00:12:43.257 nvme0n2: ios=27998/0, merge=0/0, ticks=3177/0, in_queue=3177, util=95.07% 00:12:43.257 nvme0n3: ios=21836/0, merge=0/0, ticks=2785/0, in_queue=2785, util=96.23% 00:12:43.257 nvme0n4: ios=18747/0, merge=0/0, ticks=2611/0, in_queue=2611, util=96.72% 00:12:43.515 23:59:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:43.515 23:59:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:43.772 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:43.772 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:44.030 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:44.030 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:44.287 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:44.287 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:44.545 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:44.545 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 513528 00:12:44.545 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:44.545 23:59:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:47.068 nvmf hotplug test: fio failed as expected 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:47.068 rmmod nvme_rdma 00:12:47.068 
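The teardown traced above follows a fixed pattern: nvme disconnect drops the initiator session, waitforserial_disconnect polls lsblk until no block device reports the target's serial, and modprobe -v -r unloads the transport modules (the rmmod lines around this point are its verbose output). A minimal sketch of the disconnect-and-verify step, assuming the same serial and subsystem NQN as this run and an illustrative 10-try retry budget:

    # Sketch only: NQN/serial match this run; the retry limit is an assumption.
    NQN=nqn.2016-06.io.spdk:cnode1
    SERIAL=SPDKISFASTANDAWESOME
    nvme disconnect -n "$NQN"
    for i in $(seq 1 10); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL" || break  # serial gone, done
        sleep 1
    done

The harness's own helper returns 0 once the serial disappears; anything still holding the device would keep the loop spinning.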
rmmod nvme_fabrics 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 511212 ']' 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 511212 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 511212 ']' 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 511212 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:47.068 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 511212 00:12:47.326 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:47.326 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:47.326 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 511212' 00:12:47.326 killing process with pid 511212 00:12:47.326 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 511212 00:12:47.326 [2024-05-14 23:59:16.435590] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:47.326 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 511212 00:12:47.326 [2024-05-14 23:59:16.526095] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:47.584 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.584 23:59:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:47.584 00:12:47.584 real 0m27.075s 00:12:47.584 user 1m44.091s 00:12:47.584 sys 0m7.070s 00:12:47.584 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:47.584 23:59:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.584 ************************************ 00:12:47.584 END TEST nvmf_fio_target 00:12:47.584 ************************************ 00:12:47.584 23:59:16 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:47.584 23:59:16 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:47.584 23:59:16 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.584 23:59:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:47.584 ************************************ 00:12:47.584 START TEST nvmf_bdevio 00:12:47.584 ************************************ 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:47.584 * Looking for test storage... 
00:12:47.584 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.584 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.585 23:59:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.131 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
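The discovery pass here is keyed entirely off PCI vendor/device IDs: the harness builds e810, x722, and mlx candidate lists, then narrows to the Mellanox set for this mlx5 run. The same enumeration can be reproduced by hand; a sketch assuming lspci and sysfs are available (vendor 0x15b3 is Mellanox, 0x1017 the ConnectX-5 seen below):

    # List Mellanox PCI functions, then map each to its net interface via sysfs.
    for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net 2>/dev/null)"
    done

On this node that resolves 0000:09:00.0 and 0000:09:00.1 to mlx_0_0 and mlx_0_1, which is exactly what the trace prints next.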
00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:50.132 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:50.132 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:50.132 Found net devices under 0000:09:00.0: mlx_0_0 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:50.132 Found net devices under 0000:09:00.1: mlx_0_1 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:50.132 23:59:19 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:50.132 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:50.132 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:12:50.132 altname enp9s0f0np0 00:12:50.132 inet 192.168.100.8/24 scope global mlx_0_0 00:12:50.132 valid_lft forever preferred_lft forever 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:50.132 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:50.132 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:12:50.132 altname enp9s0f1np1 00:12:50.132 inet 192.168.100.9/24 scope global mlx_0_1 00:12:50.132 valid_lft forever preferred_lft forever 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:50.132 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.133 
23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:50.133 192.168.100.9' 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:50.133 192.168.100.9' 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:50.133 192.168.100.9' 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:50.133 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:50.391 23:59:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=516669 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:50.392 23:59:19 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 516669 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 516669 ']' 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:50.392 23:59:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.392 [2024-05-14 23:59:19.514464] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:12:50.392 [2024-05-14 23:59:19.514549] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.392 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.392 [2024-05-14 23:59:19.588342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.392 [2024-05-14 23:59:19.705563] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.392 [2024-05-14 23:59:19.705628] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.392 [2024-05-14 23:59:19.705644] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.392 [2024-05-14 23:59:19.705658] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.392 [2024-05-14 23:59:19.705669] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
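nvmfappstart above launches nvmf_tgt with -m 0x78 (a core mask selecting cores 3-6, matching the four reactor lines that follow) and then blocks in waitforlisten until the RPC socket answers. A rough equivalent of that start-and-wait handshake, with the 30-try budget as an illustrative assumption:

    # Sketch: start the target, then poll the RPC socket until it responds.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    for i in $(seq 1 30); do
        "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 1
    done

Once the socket answers, the RPC-driven setup below (transport, subsystem, namespace, listener) can proceed.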
00:12:50.392 [2024-05-14 23:59:19.705789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:50.392 [2024-05-14 23:59:19.705846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:50.392 [2024-05-14 23:59:19.705900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:50.392 [2024-05-14 23:59:19.705903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.322 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:51.322 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:12:51.322 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.322 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.322 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.322 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.322 23:59:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:51.322 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.322 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.322 [2024-05-14 23:59:20.520393] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xaaf300/0xab37f0) succeed. 00:12:51.322 [2024-05-14 23:59:20.531359] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xab0940/0xaf4e80) succeed. 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 Malloc0 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.579 [2024-05-14 23:59:20.719555] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of 
trtype to be removed in v24.09 00:12:51.579 [2024-05-14 23:59:20.719844] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.579 { 00:12:51.579 "params": { 00:12:51.579 "name": "Nvme$subsystem", 00:12:51.579 "trtype": "$TEST_TRANSPORT", 00:12:51.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.579 "adrfam": "ipv4", 00:12:51.579 "trsvcid": "$NVMF_PORT", 00:12:51.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.579 "hdgst": ${hdgst:-false}, 00:12:51.579 "ddgst": ${ddgst:-false} 00:12:51.579 }, 00:12:51.579 "method": "bdev_nvme_attach_controller" 00:12:51.579 } 00:12:51.579 EOF 00:12:51.579 )") 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:51.579 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:51.580 23:59:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.580 "params": { 00:12:51.580 "name": "Nvme1", 00:12:51.580 "trtype": "rdma", 00:12:51.580 "traddr": "192.168.100.8", 00:12:51.580 "adrfam": "ipv4", 00:12:51.580 "trsvcid": "4420", 00:12:51.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.580 "hdgst": false, 00:12:51.580 "ddgst": false 00:12:51.580 }, 00:12:51.580 "method": "bdev_nvme_attach_controller" 00:12:51.580 }' 00:12:51.580 [2024-05-14 23:59:20.759223] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
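The JSON blob printed just above is what bdevio consumes on /dev/fd/62: a single bdev_nvme_attach_controller entry that hands it an Nvme1 controller over RDMA, so the test binary needs no live RPC socket of its own. The same attachment expressed as a runtime RPC against a running app would look like this (a sketch; it assumes the listener from this run at 192.168.100.8:4420):

    # Equivalent runtime RPC for the JSON config above.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t rdma \
        -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Either route ends with an Nvme1n1 namespace bdev, which is the device the bdevio suite exercises below.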
00:12:51.580 [2024-05-14 23:59:20.759307] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516828 ] 00:12:51.580 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.580 [2024-05-14 23:59:20.830241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:51.837 [2024-05-14 23:59:20.947441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.837 [2024-05-14 23:59:20.947490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.837 [2024-05-14 23:59:20.947493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.837 I/O targets: 00:12:51.837 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:51.837 00:12:51.837 00:12:51.837 CUnit - A unit testing framework for C - Version 2.1-3 00:12:51.837 http://cunit.sourceforge.net/ 00:12:51.837 00:12:51.837 00:12:51.837 Suite: bdevio tests on: Nvme1n1 00:12:51.837 Test: blockdev write read block ...passed 00:12:51.837 Test: blockdev write zeroes read block ...passed 00:12:51.837 Test: blockdev write zeroes read no split ...passed 00:12:51.838 Test: blockdev write zeroes read split ...passed 00:12:51.838 Test: blockdev write zeroes read split partial ...passed 00:12:51.838 Test: blockdev reset ...[2024-05-14 23:59:21.178702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:52.096 [2024-05-14 23:59:21.203803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:12:52.096 [2024-05-14 23:59:21.229068] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:52.096 passed 00:12:52.096 Test: blockdev write read 8 blocks ...passed 00:12:52.096 Test: blockdev write read size > 128k ...passed 00:12:52.096 Test: blockdev write read invalid size ...passed 00:12:52.096 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:52.096 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:52.096 Test: blockdev write read max offset ...passed 00:12:52.096 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:52.096 Test: blockdev writev readv 8 blocks ...passed 00:12:52.096 Test: blockdev writev readv 30 x 1block ...passed 00:12:52.096 Test: blockdev writev readv block ...passed 00:12:52.096 Test: blockdev writev readv size > 128k ...passed 00:12:52.096 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:52.096 Test: blockdev comparev and writev ...[2024-05-14 23:59:21.232892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.096 [2024-05-14 23:59:21.232927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.232957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.096 [2024-05-14 23:59:21.232973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.233172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.096 [2024-05-14 23:59:21.233195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.233211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.096 [2024-05-14 23:59:21.233226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.233438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.096 [2024-05-14 23:59:21.233460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.233476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.096 [2024-05-14 23:59:21.233490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.233691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.096 [2024-05-14 23:59:21.233712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.233729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.096 [2024-05-14 23:59:21.233743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:52.096 passed 00:12:52.096 Test: blockdev nvme passthru rw ...passed 00:12:52.096 Test: blockdev nvme passthru vendor specific ...[2024-05-14 23:59:21.234150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:52.096 [2024-05-14 23:59:21.234175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.234242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:52.096 [2024-05-14 23:59:21.234261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.234317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:52.096 [2024-05-14 23:59:21.234336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:52.096 [2024-05-14 23:59:21.234392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:52.096 [2024-05-14 23:59:21.234411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:52.096 passed 00:12:52.096 Test: blockdev nvme admin passthru ...passed 00:12:52.096 Test: blockdev copy ...passed 00:12:52.096 00:12:52.096 Run Summary: Type Total Ran Passed Failed Inactive 00:12:52.096 suites 1 1 n/a 0 0 00:12:52.096 tests 23 23 23 0 0 00:12:52.096 asserts 152 152 152 0 n/a 00:12:52.096 00:12:52.096 Elapsed time = 0.180 seconds 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:52.354 rmmod nvme_rdma 00:12:52.354 rmmod nvme_fabrics 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 516669 ']' 00:12:52.354 23:59:21 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 516669 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 516669 ']' 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 516669 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 516669 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 516669' 00:12:52.354 killing process with pid 516669 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 516669 00:12:52.354 [2024-05-14 23:59:21.564767] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:52.354 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 516669 00:12:52.354 [2024-05-14 23:59:21.653790] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:52.612 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.612 23:59:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:52.612 00:12:52.612 real 0m5.085s 00:12:52.612 user 0m10.920s 00:12:52.612 sys 0m2.455s 00:12:52.612 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.612 23:59:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.612 ************************************ 00:12:52.612 END TEST nvmf_bdevio 00:12:52.612 ************************************ 00:12:52.871 23:59:21 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:12:52.871 23:59:21 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:52.871 23:59:21 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.871 23:59:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:52.871 ************************************ 00:12:52.871 START TEST nvmf_auth_target 00:12:52.871 ************************************ 00:12:52.871 23:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:12:52.871 * Looking for test storage... 
00:12:52.871 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.871 23:59:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.872 23:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:55.396 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:55.396 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:55.396 Found net devices under 0000:09:00.0: mlx_0_0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:55.396 Found net devices under 0000:09:00.1: mlx_0_1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:55.396 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:55.396 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:12:55.396 altname enp9s0f0np0 00:12:55.396 inet 192.168.100.8/24 scope global mlx_0_0 00:12:55.396 valid_lft forever preferred_lft forever 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:55.396 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:55.396 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:12:55.396 altname enp9s0f1np1 00:12:55.396 inet 192.168.100.9/24 scope global mlx_0_1 00:12:55.396 valid_lft forever preferred_lft forever 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- 
# get_rdma_if_list 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:55.396 192.168.100.9' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:55.396 192.168.100.9' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:55.396 
192.168.100.9' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=518920 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 518920 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 518920 ']' 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
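The waitforlisten 518920 call above blocks until the freshly started nvmf_tgt answers on its RPC UNIX socket. A minimal sketch of what such a gate has to do, assuming the real helper in autotest_common.sh adds more retries and diagnostics than this:

# Hypothetical simplification of waitforlisten: succeed once the pid is
# still alive and its RPC socket has appeared; fail if the app died.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target exited early
        [[ -S $rpc_addr ]] && return 0            # socket file present
        sleep 0.1
    done
    return 1                                      # timed out
}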
00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:55.396 23:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=519072 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=be2bb590f2e4875bccb104e09fd66485b5507a66957f3a15 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ayq 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key be2bb590f2e4875bccb104e09fd66485b5507a66957f3a15 0 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 be2bb590f2e4875bccb104e09fd66485b5507a66957f3a15 0 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=be2bb590f2e4875bccb104e09fd66485b5507a66957f3a15 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ayq 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ayq 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.ayq 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cbc070b0c17275a36b2dc20bfdb44397 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.XsA 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cbc070b0c17275a36b2dc20bfdb44397 1 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cbc070b0c17275a36b2dc20bfdb44397 1 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cbc070b0c17275a36b2dc20bfdb44397 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.XsA 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.XsA 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.XsA 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d05647185cc3f37b056ab7172be658703a0986b63a8bac4d 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.32d 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d05647185cc3f37b056ab7172be658703a0986b63a8bac4d 2 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d05647185cc3f37b056ab7172be658703a0986b63a8bac4d 2 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=d05647185cc3f37b056ab7172be658703a0986b63a8bac4d 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.32d 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.32d 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.32d 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d65887de4f8941720f5beca75ea3aa87a4418ddbf1679907740b201afa3d7a64 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yfU 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d65887de4f8941720f5beca75ea3aa87a4418ddbf1679907740b201afa3d7a64 3 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d65887de4f8941720f5beca75ea3aa87a4418ddbf1679907740b201afa3d7a64 3 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d65887de4f8941720f5beca75ea3aa87a4418ddbf1679907740b201afa3d7a64 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yfU 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yfU 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.yfU 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 518920 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 518920 ']' 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
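Each gen_dhchap_key call above draws random bytes with xxd from /dev/urandom and wraps the resulting hex string as a DHHC-1 secret. Decoding the secrets passed to nvme connect further down shows the base64 payload is the ASCII hex key followed by four extra bytes; here is a sketch of the formatting step under the assumption that the trailer is a little-endian CRC32 of the key bytes (the two-digit digest field is likewise inferred from the DHHC-1:00/01/02 prefixes):

# Sketch of format_dhchap_key: hex key string in, "DHHC-1:<digest>:<base64>:" out.
format_dhchap_key() {
    local key=$1 digest=$2
    python - "$key" "$digest" <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                        # ASCII hex string, not raw bytes
digest = int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)  # 4-byte trailer (CRC32 assumption)
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# format_dhchap_key be2bb590f2e4875bccb104e09fd66485b5507a66957f3a15 0
# should yield the DHHC-1:00:YmUyYmI1...YTE1lw+qqQ==: secret used below;
# the first 64 base64 characters demonstrably encode the hex string, so
# only the 4-byte trailer is inferred.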
00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.765 23:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.023 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.023 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:12:57.023 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 519072 /var/tmp/host.sock 00:12:57.023 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 519072 ']' 00:12:57.023 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:12:57.023 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:57.023 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:57.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:57.024 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:57.024 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.281 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.281 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ayq 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ayq 00:12:57.282 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ayq 00:12:57.538 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:12:57.538 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.XsA 00:12:57.538 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.538 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.538 23:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.538 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.XsA 00:12:57.538 23:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.XsA 00:12:57.796 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:12:57.796 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.32d 00:12:57.796 23:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.796 23:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.796 23:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.796 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.32d 00:12:57.796 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.32d 00:12:58.053 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:12:58.053 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yfU 00:12:58.053 23:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.053 23:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.053 23:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.053 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.yfU 00:12:58.053 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.yfU 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:58.620 23:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:59.187 00:12:59.187 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:12:59.187 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:12:59.187 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:12:59.445 { 00:12:59.445 "cntlid": 1, 00:12:59.445 "qid": 0, 00:12:59.445 "state": "enabled", 00:12:59.445 "listen_address": { 00:12:59.445 "trtype": "RDMA", 00:12:59.445 "adrfam": "IPv4", 00:12:59.445 "traddr": "192.168.100.8", 00:12:59.445 "trsvcid": "4420" 00:12:59.445 }, 00:12:59.445 "peer_address": { 00:12:59.445 "trtype": "RDMA", 00:12:59.445 "adrfam": "IPv4", 00:12:59.445 "traddr": "192.168.100.8", 00:12:59.445 "trsvcid": "49284" 00:12:59.445 }, 00:12:59.445 "auth": { 00:12:59.445 "state": "completed", 00:12:59.445 "digest": "sha256", 00:12:59.445 "dhgroup": "null" 00:12:59.445 } 00:12:59.445 } 00:12:59.445 ]' 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.445 23:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.010 23:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:13:00.945 23:59:30 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.203 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:01.203 23:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.203 23:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.203 23:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.203 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:01.203 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:01.203 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:01.461 23:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:02.026 00:13:02.026 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:02.026 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.026 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:02.026 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.026 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.026 23:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.026 23:59:31 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.026 23:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.026 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:02.026 { 00:13:02.026 "cntlid": 3, 00:13:02.026 "qid": 0, 00:13:02.026 "state": "enabled", 00:13:02.026 "listen_address": { 00:13:02.026 "trtype": "RDMA", 00:13:02.026 "adrfam": "IPv4", 00:13:02.026 "traddr": "192.168.100.8", 00:13:02.026 "trsvcid": "4420" 00:13:02.026 }, 00:13:02.026 "peer_address": { 00:13:02.026 "trtype": "RDMA", 00:13:02.026 "adrfam": "IPv4", 00:13:02.026 "traddr": "192.168.100.8", 00:13:02.026 "trsvcid": "36241" 00:13:02.026 }, 00:13:02.026 "auth": { 00:13:02.026 "state": "completed", 00:13:02.026 "digest": "sha256", 00:13:02.026 "dhgroup": "null" 00:13:02.026 } 00:13:02.026 } 00:13:02.026 ]' 00:13:02.026 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:02.283 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.283 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:02.283 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:13:02.283 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:02.283 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.283 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.283 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.540 23:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:13:03.911 23:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.911 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.911 23:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.911 23:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.911 23:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.911 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:03.911 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:03.911 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 
00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:04.169 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:04.761 00:13:04.761 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:04.761 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:04.761 23:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:05.018 { 00:13:05.018 "cntlid": 5, 00:13:05.018 "qid": 0, 00:13:05.018 "state": "enabled", 00:13:05.018 "listen_address": { 00:13:05.018 "trtype": "RDMA", 00:13:05.018 "adrfam": "IPv4", 00:13:05.018 "traddr": "192.168.100.8", 00:13:05.018 "trsvcid": "4420" 00:13:05.018 }, 00:13:05.018 "peer_address": { 00:13:05.018 "trtype": "RDMA", 00:13:05.018 "adrfam": "IPv4", 00:13:05.018 "traddr": "192.168.100.8", 00:13:05.018 "trsvcid": "40386" 00:13:05.018 }, 00:13:05.018 "auth": { 00:13:05.018 "state": "completed", 00:13:05.018 "digest": "sha256", 00:13:05.018 "dhgroup": "null" 00:13:05.018 } 00:13:05.018 } 00:13:05.018 ]' 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.018 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.274 23:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:13:06.645 23:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.645 23:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.645 23:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.645 23:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.645 23:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.645 23:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:06.645 23:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:06.646 23:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:06.903 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.468 00:13:07.468 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:07.468 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:07.468 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:07.724 { 00:13:07.724 "cntlid": 7, 00:13:07.724 "qid": 0, 00:13:07.724 "state": "enabled", 00:13:07.724 "listen_address": { 00:13:07.724 "trtype": "RDMA", 00:13:07.724 "adrfam": "IPv4", 00:13:07.724 "traddr": "192.168.100.8", 00:13:07.724 "trsvcid": "4420" 00:13:07.724 }, 00:13:07.724 "peer_address": { 00:13:07.724 "trtype": "RDMA", 00:13:07.724 "adrfam": "IPv4", 00:13:07.724 "traddr": "192.168.100.8", 00:13:07.724 "trsvcid": "38986" 00:13:07.724 }, 00:13:07.724 "auth": { 00:13:07.724 "state": "completed", 00:13:07.724 "digest": "sha256", 00:13:07.724 "dhgroup": "null" 00:13:07.724 } 00:13:07.724 } 00:13:07.724 ]' 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.724 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.725 23:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.981 23:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:13:09.354 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.354 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:09.354 23:59:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:13:09.354 23:59:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.354 23:59:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.354 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.354 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:09.354 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:09.354 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:09.611 23:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:10.177 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:10.177 { 00:13:10.177 "cntlid": 9, 00:13:10.177 "qid": 0, 00:13:10.177 "state": 
"enabled", 00:13:10.177 "listen_address": { 00:13:10.177 "trtype": "RDMA", 00:13:10.177 "adrfam": "IPv4", 00:13:10.177 "traddr": "192.168.100.8", 00:13:10.177 "trsvcid": "4420" 00:13:10.177 }, 00:13:10.177 "peer_address": { 00:13:10.177 "trtype": "RDMA", 00:13:10.177 "adrfam": "IPv4", 00:13:10.177 "traddr": "192.168.100.8", 00:13:10.177 "trsvcid": "58738" 00:13:10.177 }, 00:13:10.177 "auth": { 00:13:10.177 "state": "completed", 00:13:10.177 "digest": "sha256", 00:13:10.177 "dhgroup": "ffdhe2048" 00:13:10.177 } 00:13:10.177 } 00:13:10.177 ]' 00:13:10.177 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:10.435 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.435 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:10.435 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:10.435 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:10.435 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.435 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.435 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.693 23:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:13:12.066 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.066 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:12.066 23:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.066 23:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.066 23:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.066 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:12.066 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:12.066 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:12.323 23:59:41 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:12.323 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:12.580 00:13:12.581 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:12.581 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:12.581 23:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.838 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.838 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.838 23:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.838 23:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 23:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.838 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:12.838 { 00:13:12.838 "cntlid": 11, 00:13:12.838 "qid": 0, 00:13:12.838 "state": "enabled", 00:13:12.838 "listen_address": { 00:13:12.838 "trtype": "RDMA", 00:13:12.838 "adrfam": "IPv4", 00:13:12.838 "traddr": "192.168.100.8", 00:13:12.838 "trsvcid": "4420" 00:13:12.838 }, 00:13:12.838 "peer_address": { 00:13:12.838 "trtype": "RDMA", 00:13:12.838 "adrfam": "IPv4", 00:13:12.838 "traddr": "192.168.100.8", 00:13:12.838 "trsvcid": "35502" 00:13:12.838 }, 00:13:12.838 "auth": { 00:13:12.838 "state": "completed", 00:13:12.838 "digest": "sha256", 00:13:12.838 "dhgroup": "ffdhe2048" 00:13:12.838 } 00:13:12.838 } 00:13:12.838 ]' 00:13:12.838 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:12.838 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.838 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:13.096 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:13.096 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:13.096 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.096 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.096 23:59:42 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.353 23:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:13:14.742 23:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.742 23:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.742 23:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.742 23:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.742 23:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.742 23:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:14.742 23:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:14.742 23:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:15.000 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:15.258 00:13:15.258 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:15.258 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
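After each attach the script reads the subsystem's qpairs back from the target and asserts the negotiated parameters; that is where the JSON blocks in this log come from. Condensed into standalone shell (a sketch: the herestring plumbing replaces the script's rpc_cmd/hostrpc wrappers, and the escaped \s\h\a\2\5\6-style patterns in the trace are just xtrace rendering of these same comparisons):

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # Assert the digest and DH group this round configured were actually negotiated.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  # "completed" means the DH-HMAC-CHAP handshake finished on this qpair.
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
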
00:13:15.258 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.516 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.516 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.516 23:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.516 23:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.516 23:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.516 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:15.516 { 00:13:15.516 "cntlid": 13, 00:13:15.516 "qid": 0, 00:13:15.516 "state": "enabled", 00:13:15.516 "listen_address": { 00:13:15.516 "trtype": "RDMA", 00:13:15.516 "adrfam": "IPv4", 00:13:15.516 "traddr": "192.168.100.8", 00:13:15.516 "trsvcid": "4420" 00:13:15.516 }, 00:13:15.516 "peer_address": { 00:13:15.516 "trtype": "RDMA", 00:13:15.516 "adrfam": "IPv4", 00:13:15.516 "traddr": "192.168.100.8", 00:13:15.516 "trsvcid": "55109" 00:13:15.516 }, 00:13:15.516 "auth": { 00:13:15.516 "state": "completed", 00:13:15.516 "digest": "sha256", 00:13:15.516 "dhgroup": "ffdhe2048" 00:13:15.516 } 00:13:15.516 } 00:13:15.516 ]' 00:13:15.516 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:15.516 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.516 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:15.773 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:15.773 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:15.773 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.773 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.773 23:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.031 23:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:13:17.402 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.402 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.402 23:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.402 23:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.402 23:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.402 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:17.402 
23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:17.402 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:17.660 23:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:17.917 00:13:17.917 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:17.917 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:17.917 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.174 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.174 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.174 23:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.174 23:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.174 23:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.174 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:18.174 { 00:13:18.174 "cntlid": 15, 00:13:18.174 "qid": 0, 00:13:18.174 "state": "enabled", 00:13:18.174 "listen_address": { 00:13:18.174 "trtype": "RDMA", 00:13:18.174 "adrfam": "IPv4", 00:13:18.174 "traddr": "192.168.100.8", 00:13:18.174 "trsvcid": "4420" 00:13:18.174 }, 00:13:18.174 "peer_address": { 00:13:18.174 "trtype": "RDMA", 00:13:18.174 "adrfam": "IPv4", 00:13:18.174 "traddr": "192.168.100.8", 00:13:18.174 "trsvcid": "46633" 00:13:18.174 }, 00:13:18.174 "auth": { 00:13:18.174 "state": "completed", 
00:13:18.174 "digest": "sha256", 00:13:18.174 "dhgroup": "ffdhe2048" 00:13:18.174 } 00:13:18.174 } 00:13:18.175 ]' 00:13:18.175 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:18.175 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:18.175 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:18.432 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:18.432 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:18.432 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.432 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.432 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.707 23:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:13:20.093 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.093 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.093 23:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.093 23:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.093 23:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.093 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.093 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:20.093 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:20.093 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
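Each nvme connect pass in the trace exercises the same handshake from the kernel initiator, passing the key in its textual DHHC-1 form. The second field of the secret ("00" through "03" across this run, matching key0 through key3) encodes the optional hash transform applied to the key material per the NVMe in-band authentication spec, with 00 denoting an untransformed secret. One connect/disconnect pair from this trace, reflowed for readability:

  # Connect with DH-HMAC-CHAP; the secret string is copied verbatim from the trace.
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
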
00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:20.351 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:20.608 00:13:20.865 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:20.865 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:20.865 23:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:21.124 { 00:13:21.124 "cntlid": 17, 00:13:21.124 "qid": 0, 00:13:21.124 "state": "enabled", 00:13:21.124 "listen_address": { 00:13:21.124 "trtype": "RDMA", 00:13:21.124 "adrfam": "IPv4", 00:13:21.124 "traddr": "192.168.100.8", 00:13:21.124 "trsvcid": "4420" 00:13:21.124 }, 00:13:21.124 "peer_address": { 00:13:21.124 "trtype": "RDMA", 00:13:21.124 "adrfam": "IPv4", 00:13:21.124 "traddr": "192.168.100.8", 00:13:21.124 "trsvcid": "41129" 00:13:21.124 }, 00:13:21.124 "auth": { 00:13:21.124 "state": "completed", 00:13:21.124 "digest": "sha256", 00:13:21.124 "dhgroup": "ffdhe3072" 00:13:21.124 } 00:13:21.124 } 00:13:21.124 ]' 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.124 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.383 23:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:13:22.755 23:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.756 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:22.756 23:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.756 23:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.756 23:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.756 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:22.756 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:22.756 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:23.013 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:13:23.013 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:23.013 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:23.013 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:23.013 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:23.013 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:13:23.014 23:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.014 23:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.014 23:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.014 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:23.014 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:23.578 00:13:23.578 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:23.579 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:23.579 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.579 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:13:23.579 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.579 23:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.579 23:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.836 23:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.836 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:23.836 { 00:13:23.836 "cntlid": 19, 00:13:23.836 "qid": 0, 00:13:23.836 "state": "enabled", 00:13:23.836 "listen_address": { 00:13:23.836 "trtype": "RDMA", 00:13:23.836 "adrfam": "IPv4", 00:13:23.836 "traddr": "192.168.100.8", 00:13:23.836 "trsvcid": "4420" 00:13:23.836 }, 00:13:23.836 "peer_address": { 00:13:23.836 "trtype": "RDMA", 00:13:23.836 "adrfam": "IPv4", 00:13:23.836 "traddr": "192.168.100.8", 00:13:23.836 "trsvcid": "47518" 00:13:23.836 }, 00:13:23.836 "auth": { 00:13:23.836 "state": "completed", 00:13:23.836 "digest": "sha256", 00:13:23.836 "dhgroup": "ffdhe3072" 00:13:23.836 } 00:13:23.836 } 00:13:23.836 ]' 00:13:23.836 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:23.836 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.836 23:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:23.836 23:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:23.836 23:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:23.836 23:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.836 23:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.836 23:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.094 23:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:13:25.465 23:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.465 23:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.465 23:59:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.465 23:59:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.465 23:59:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.465 23:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:25.465 23:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:25.465 23:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:25.724 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:26.289 00:13:26.289 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:26.289 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:26.289 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.289 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:26.547 { 00:13:26.547 "cntlid": 21, 00:13:26.547 "qid": 0, 00:13:26.547 "state": "enabled", 00:13:26.547 "listen_address": { 00:13:26.547 "trtype": "RDMA", 00:13:26.547 "adrfam": "IPv4", 00:13:26.547 "traddr": "192.168.100.8", 00:13:26.547 "trsvcid": "4420" 00:13:26.547 }, 00:13:26.547 "peer_address": { 00:13:26.547 "trtype": "RDMA", 00:13:26.547 "adrfam": "IPv4", 00:13:26.547 "traddr": "192.168.100.8", 00:13:26.547 "trsvcid": "41379" 00:13:26.547 }, 00:13:26.547 "auth": { 00:13:26.547 "state": "completed", 00:13:26.547 "digest": "sha256", 00:13:26.547 "dhgroup": "ffdhe3072" 00:13:26.547 } 00:13:26.547 } 00:13:26.547 ]' 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 
-- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.547 23:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.804 23:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:13:28.176 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.176 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:28.176 23:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.176 23:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.176 23:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.176 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:28.176 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:28.176 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.434 23:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.999 00:13:28.999 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:28.999 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:28.999 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.255 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.255 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.255 23:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.255 23:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.255 23:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.255 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:29.255 { 00:13:29.255 "cntlid": 23, 00:13:29.255 "qid": 0, 00:13:29.255 "state": "enabled", 00:13:29.255 "listen_address": { 00:13:29.255 "trtype": "RDMA", 00:13:29.255 "adrfam": "IPv4", 00:13:29.255 "traddr": "192.168.100.8", 00:13:29.255 "trsvcid": "4420" 00:13:29.255 }, 00:13:29.255 "peer_address": { 00:13:29.255 "trtype": "RDMA", 00:13:29.255 "adrfam": "IPv4", 00:13:29.255 "traddr": "192.168.100.8", 00:13:29.255 "trsvcid": "47698" 00:13:29.255 }, 00:13:29.256 "auth": { 00:13:29.256 "state": "completed", 00:13:29.256 "digest": "sha256", 00:13:29.256 "dhgroup": "ffdhe3072" 00:13:29.256 } 00:13:29.256 } 00:13:29.256 ]' 00:13:29.256 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:29.256 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.256 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:29.256 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:29.256 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:29.256 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.256 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.256 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.512 23:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:13:30.884 23:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:30.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.884 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.884 00:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.884 00:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.884 00:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.884 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:13:30.884 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:30.884 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:30.884 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:31.142 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:31.708 00:13:31.708 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:31.708 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:31.708 00:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:31.966 { 00:13:31.966 "cntlid": 25, 00:13:31.966 "qid": 0, 00:13:31.966 "state": "enabled", 00:13:31.966 "listen_address": { 00:13:31.966 "trtype": "RDMA", 00:13:31.966 "adrfam": "IPv4", 00:13:31.966 "traddr": "192.168.100.8", 00:13:31.966 "trsvcid": "4420" 00:13:31.966 }, 00:13:31.966 "peer_address": { 00:13:31.966 "trtype": "RDMA", 00:13:31.966 "adrfam": "IPv4", 00:13:31.966 "traddr": "192.168.100.8", 00:13:31.966 "trsvcid": "49804" 00:13:31.966 }, 00:13:31.966 "auth": { 00:13:31.966 "state": "completed", 00:13:31.966 "digest": "sha256", 00:13:31.966 "dhgroup": "ffdhe4096" 00:13:31.966 } 00:13:31.966 } 00:13:31.966 ]' 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.966 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.224 00:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:13:33.604 00:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.604 00:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:33.604 00:00:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.604 00:00:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.604 00:00:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.604 00:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:33.604 00:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.604 00:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate 
sha256 ffdhe4096 1 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:33.876 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:34.133 00:13:34.390 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:34.390 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:34.390 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.390 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.390 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.390 00:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.390 00:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:34.648 { 00:13:34.648 "cntlid": 27, 00:13:34.648 "qid": 0, 00:13:34.648 "state": "enabled", 00:13:34.648 "listen_address": { 00:13:34.648 "trtype": "RDMA", 00:13:34.648 "adrfam": "IPv4", 00:13:34.648 "traddr": "192.168.100.8", 00:13:34.648 "trsvcid": "4420" 00:13:34.648 }, 00:13:34.648 "peer_address": { 00:13:34.648 "trtype": "RDMA", 00:13:34.648 "adrfam": "IPv4", 00:13:34.648 "traddr": "192.168.100.8", 00:13:34.648 "trsvcid": "47992" 00:13:34.648 }, 00:13:34.648 "auth": { 00:13:34.648 "state": "completed", 00:13:34.648 "digest": "sha256", 00:13:34.648 "dhgroup": "ffdhe4096" 00:13:34.648 } 00:13:34.648 } 00:13:34.648 ]' 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.648 00:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.906 00:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:13:36.277 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.277 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:36.277 00:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.277 00:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.277 00:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.277 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:36.277 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:36.277 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:36.535 00:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:13:37.099
00:13:37.099 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:13:37.099 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:13:37.099 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:13:37.357 {
00:13:37.357 "cntlid": 29,
00:13:37.357 "qid": 0,
00:13:37.357 "state": "enabled",
00:13:37.357 "listen_address": {
00:13:37.357 "trtype": "RDMA",
00:13:37.357 "adrfam": "IPv4",
00:13:37.357 "traddr": "192.168.100.8",
00:13:37.357 "trsvcid": "4420"
00:13:37.357 },
00:13:37.357 "peer_address": {
00:13:37.357 "trtype": "RDMA",
00:13:37.357 "adrfam": "IPv4",
00:13:37.357 "traddr": "192.168.100.8",
00:13:37.357 "trsvcid": "39165"
00:13:37.357 },
00:13:37.357 "auth": {
00:13:37.357 "state": "completed",
00:13:37.357 "digest": "sha256",
00:13:37.357 "dhgroup": "ffdhe4096"
00:13:37.357 }
00:13:37.357 }
00:13:37.357 ]'
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:37.357 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:37.614 00:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==:
00:13:38.984 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:38.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:38.984 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
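
The block above is one complete verification round, and the trace repeats it for every (digest, dhgroup, keyid) combination. Condensed into plain shell, a round looks roughly like the sketch below. The hostrpc/rpc_cmd split comes straight from the trace (host-side RPCs go to /var/tmp/host.sock, target-side RPCs to the default SPDK socket); the rpc, hostnqn, subnqn and secret variables are introduced here for readability and are not verbatim from auth.sh:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Pin the host stack to the digest and DH group under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Authorize the host on the target with the key being exercised (key0..key3).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2

    # Attaching a controller forces a DH-HMAC-CHAP handshake over RDMA.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma \
        -f ipv4 -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2

    # Assert the negotiated auth parameters on the target's queue pair.
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down, re-run the same handshake through the kernel initiator
    # ($secret is the matching DHHC-1 string), then revoke the host so the
    # next round starts clean.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret "$secret"
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

A round passes only if both the SPDK initiator and the kernel nvme-cli initiator complete the handshake and the target reports auth.state of completed for the resulting queue pair.
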
00:13:38.984 00:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:38.984 00:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:38.984 00:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:38.984 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:13:38.984 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:13:38.984 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:13:39.242 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3
00:13:39.242 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:13:39.242 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:13:39.242 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:13:39.242 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:13:39.242 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:13:39.242 00:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:39.242 00:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:39.507 00:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:39.507 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:13:39.507 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:13:39.779
00:13:39.779 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:13:39.779 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:13:39.779 00:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:13:40.048 {
00:13:40.048 "cntlid": 31,
00:13:40.048 "qid": 0,
00:13:40.048 "state": "enabled",
00:13:40.048 "listen_address": {
00:13:40.048 "trtype": "RDMA",
00:13:40.048 "adrfam": "IPv4",
00:13:40.048 "traddr": "192.168.100.8",
00:13:40.048 "trsvcid": "4420"
00:13:40.048 },
00:13:40.048 "peer_address": {
00:13:40.048 "trtype": "RDMA",
00:13:40.048 "adrfam": "IPv4",
00:13:40.048 "traddr": "192.168.100.8",
00:13:40.048 "trsvcid": "45324"
00:13:40.048 },
00:13:40.048 "auth": {
00:13:40.048 "state": "completed",
00:13:40.048 "digest": "sha256",
00:13:40.048 "dhgroup": "ffdhe4096"
00:13:40.048 }
00:13:40.048 }
00:13:40.048 ]'
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:40.048 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:40.312 00:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=:
00:13:41.721 00:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:41.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:41.721 00:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:13:41.721 00:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:41.721 00:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:41.721 00:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:41.721 00:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:13:41.721 00:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:13:41.721 00:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:41.721 00:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0
00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:41.990 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:42.566 00:13:42.566 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:42.566 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:42.566 00:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.828 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.828 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.828 00:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.828 00:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.828 00:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.828 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:42.828 { 00:13:42.828 "cntlid": 33, 00:13:42.828 "qid": 0, 00:13:42.828 "state": "enabled", 00:13:42.828 "listen_address": { 00:13:42.828 "trtype": "RDMA", 00:13:42.828 "adrfam": "IPv4", 00:13:42.828 "traddr": "192.168.100.8", 00:13:42.828 "trsvcid": "4420" 00:13:42.828 }, 00:13:42.828 "peer_address": { 00:13:42.828 "trtype": "RDMA", 00:13:42.828 "adrfam": "IPv4", 00:13:42.828 "traddr": "192.168.100.8", 00:13:42.828 "trsvcid": "51588" 00:13:42.828 }, 00:13:42.828 "auth": { 00:13:42.828 "state": "completed", 00:13:42.828 "digest": "sha256", 00:13:42.828 "dhgroup": "ffdhe6144" 00:13:42.828 } 00:13:42.828 } 00:13:42.828 ]' 00:13:42.828 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:42.828 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:42.828 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:42.829 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:42.829 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:43.092 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.092 00:00:12 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.092 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.358 00:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:13:44.753 00:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.753 00:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.753 00:00:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.753 00:00:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.753 00:00:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.753 00:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:44.753 00:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:44.753 00:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:45.016 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:45.600 00:13:45.600 00:00:14 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:45.600 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:45.600 00:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:45.861 { 00:13:45.861 "cntlid": 35, 00:13:45.861 "qid": 0, 00:13:45.861 "state": "enabled", 00:13:45.861 "listen_address": { 00:13:45.861 "trtype": "RDMA", 00:13:45.861 "adrfam": "IPv4", 00:13:45.861 "traddr": "192.168.100.8", 00:13:45.861 "trsvcid": "4420" 00:13:45.861 }, 00:13:45.861 "peer_address": { 00:13:45.861 "trtype": "RDMA", 00:13:45.861 "adrfam": "IPv4", 00:13:45.861 "traddr": "192.168.100.8", 00:13:45.861 "trsvcid": "51114" 00:13:45.861 }, 00:13:45.861 "auth": { 00:13:45.861 "state": "completed", 00:13:45.861 "digest": "sha256", 00:13:45.861 "dhgroup": "ffdhe6144" 00:13:45.861 } 00:13:45.861 } 00:13:45.861 ]' 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.861 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.117 00:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:13:47.484 00:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.484 00:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:47.484 00:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.484 00:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.484 00:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:13:47.484 00:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:47.484 00:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:47.484 00:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:47.742 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:48.356 00:13:48.356 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:48.356 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:48.356 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.613 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.613 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.613 00:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.613 00:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.613 00:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.613 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:48.613 { 00:13:48.613 "cntlid": 37, 00:13:48.613 "qid": 0, 00:13:48.613 "state": "enabled", 00:13:48.613 "listen_address": { 00:13:48.613 "trtype": "RDMA", 00:13:48.613 "adrfam": "IPv4", 00:13:48.613 "traddr": "192.168.100.8", 00:13:48.613 "trsvcid": "4420" 00:13:48.613 }, 00:13:48.614 "peer_address": { 00:13:48.614 "trtype": "RDMA", 00:13:48.614 "adrfam": "IPv4", 00:13:48.614 "traddr": 
"192.168.100.8", 00:13:48.614 "trsvcid": "50023" 00:13:48.614 }, 00:13:48.614 "auth": { 00:13:48.614 "state": "completed", 00:13:48.614 "digest": "sha256", 00:13:48.614 "dhgroup": "ffdhe6144" 00:13:48.614 } 00:13:48.614 } 00:13:48.614 ]' 00:13:48.614 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:48.614 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.614 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:48.871 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:48.871 00:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:48.871 00:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.871 00:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.871 00:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.129 00:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:13:50.519 00:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.519 00:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:50.519 00:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.519 00:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.519 00:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.519 00:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:50.519 00:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:50.519 00:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.776 
00:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:50.776 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:51.341 00:13:51.341 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:51.341 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:51.341 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:51.597 { 00:13:51.597 "cntlid": 39, 00:13:51.597 "qid": 0, 00:13:51.597 "state": "enabled", 00:13:51.597 "listen_address": { 00:13:51.597 "trtype": "RDMA", 00:13:51.597 "adrfam": "IPv4", 00:13:51.597 "traddr": "192.168.100.8", 00:13:51.597 "trsvcid": "4420" 00:13:51.597 }, 00:13:51.597 "peer_address": { 00:13:51.597 "trtype": "RDMA", 00:13:51.597 "adrfam": "IPv4", 00:13:51.597 "traddr": "192.168.100.8", 00:13:51.597 "trsvcid": "44571" 00:13:51.597 }, 00:13:51.597 "auth": { 00:13:51.597 "state": "completed", 00:13:51.597 "digest": "sha256", 00:13:51.597 "dhgroup": "ffdhe6144" 00:13:51.597 } 00:13:51.597 } 00:13:51.597 ]' 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:51.597 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:51.854 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:51.854 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.854 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.854 00:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.110 00:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:13:53.479 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.479 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.479 00:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.479 00:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.479 00:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.479 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:13:53.479 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:53.479 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:53.479 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:53.735 00:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:54.666 00:13:54.666 00:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:54.666 00:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:54.666 00:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:54.923 { 00:13:54.923 "cntlid": 41, 00:13:54.923 "qid": 0, 00:13:54.923 "state": "enabled", 00:13:54.923 "listen_address": { 00:13:54.923 "trtype": "RDMA", 00:13:54.923 "adrfam": "IPv4", 00:13:54.923 "traddr": "192.168.100.8", 00:13:54.923 "trsvcid": "4420" 00:13:54.923 }, 00:13:54.923 "peer_address": { 00:13:54.923 "trtype": "RDMA", 00:13:54.923 "adrfam": "IPv4", 00:13:54.923 "traddr": "192.168.100.8", 00:13:54.923 "trsvcid": "51461" 00:13:54.923 }, 00:13:54.923 "auth": { 00:13:54.923 "state": "completed", 00:13:54.923 "digest": "sha256", 00:13:54.923 "dhgroup": "ffdhe8192" 00:13:54.923 } 00:13:54.923 } 00:13:54.923 ]' 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:54.923 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:55.181 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.181 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.181 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.438 00:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:13:56.807 00:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.808 00:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.808 00:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.808 00:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.808 00:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.808 00:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:56.808 00:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:56.808 
00:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:57.065 00:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:13:57.995 00:13:57.995 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:13:57.995 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:13:57.995 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:13:58.253 { 00:13:58.253 "cntlid": 43, 00:13:58.253 "qid": 0, 00:13:58.253 "state": "enabled", 00:13:58.253 "listen_address": { 00:13:58.253 "trtype": "RDMA", 00:13:58.253 "adrfam": "IPv4", 00:13:58.253 "traddr": "192.168.100.8", 00:13:58.253 "trsvcid": "4420" 00:13:58.253 }, 00:13:58.253 "peer_address": { 00:13:58.253 "trtype": "RDMA", 00:13:58.253 "adrfam": "IPv4", 00:13:58.253 "traddr": "192.168.100.8", 00:13:58.253 "trsvcid": "55942" 00:13:58.253 }, 00:13:58.253 "auth": { 00:13:58.253 "state": "completed", 00:13:58.253 "digest": "sha256", 00:13:58.253 "dhgroup": "ffdhe8192" 00:13:58.253 } 00:13:58.253 } 00:13:58.253 ]' 00:13:58.253 00:00:27 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.253 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.511 00:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:13:59.883 00:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.883 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:59.883 00:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.883 00:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.883 00:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.883 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:13:59.883 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:59.883 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:00.141 00:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:01.074 00:14:01.074 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:01.074 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:01.074 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.331 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.331 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.331 00:00:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.331 00:00:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.331 00:00:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.331 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:01.331 { 00:14:01.331 "cntlid": 45, 00:14:01.331 "qid": 0, 00:14:01.331 "state": "enabled", 00:14:01.331 "listen_address": { 00:14:01.331 "trtype": "RDMA", 00:14:01.331 "adrfam": "IPv4", 00:14:01.331 "traddr": "192.168.100.8", 00:14:01.331 "trsvcid": "4420" 00:14:01.331 }, 00:14:01.331 "peer_address": { 00:14:01.331 "trtype": "RDMA", 00:14:01.331 "adrfam": "IPv4", 00:14:01.331 "traddr": "192.168.100.8", 00:14:01.331 "trsvcid": "47328" 00:14:01.332 }, 00:14:01.332 "auth": { 00:14:01.332 "state": "completed", 00:14:01.332 "digest": "sha256", 00:14:01.332 "dhgroup": "ffdhe8192" 00:14:01.332 } 00:14:01.332 } 00:14:01.332 ]' 00:14:01.332 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:01.588 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.588 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:01.588 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:01.588 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:01.588 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.588 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.588 00:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.846 00:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:14:03.249 00:00:32 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.249 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.249 00:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.249 00:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.249 00:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.249 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:03.249 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:03.249 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.506 00:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:04.440 00:14:04.440 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:04.440 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:04.440 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.698 
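
Each connect_authenticate pass in this trace drives the same three RPCs before the qpair check. A minimal stand-alone sketch of that sequence, using the socket, NQNs and address visible in the log (rpc_cmd stands in for the target-side RPC wrapper the test harness provides; values shown are the ones from this run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # pin the host bdev layer to a single digest/dhgroup combination
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # authorize the host on the subsystem with the key under test (target side)
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
    # attach from the host; this only succeeds if DH-HMAC-CHAP completes
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3
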
00:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:04.698 { 00:14:04.698 "cntlid": 47, 00:14:04.698 "qid": 0, 00:14:04.698 "state": "enabled", 00:14:04.698 "listen_address": { 00:14:04.698 "trtype": "RDMA", 00:14:04.698 "adrfam": "IPv4", 00:14:04.698 "traddr": "192.168.100.8", 00:14:04.698 "trsvcid": "4420" 00:14:04.698 }, 00:14:04.698 "peer_address": { 00:14:04.698 "trtype": "RDMA", 00:14:04.698 "adrfam": "IPv4", 00:14:04.698 "traddr": "192.168.100.8", 00:14:04.698 "trsvcid": "49412" 00:14:04.698 }, 00:14:04.698 "auth": { 00:14:04.698 "state": "completed", 00:14:04.698 "digest": "sha256", 00:14:04.698 "dhgroup": "ffdhe8192" 00:14:04.698 } 00:14:04.698 } 00:14:04.698 ]' 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:04.698 00:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:04.698 00:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.698 00:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.698 00:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.956 00:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:14:06.327 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.327 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.327 00:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.327 00:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:06.584 00:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:07.150 00:14:07.150 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:07.150 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:07.150 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.407 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.407 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.407 00:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.407 00:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.407 00:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.407 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:07.407 { 00:14:07.407 "cntlid": 49, 00:14:07.407 "qid": 0, 00:14:07.407 "state": "enabled", 00:14:07.407 "listen_address": { 00:14:07.407 "trtype": "RDMA", 00:14:07.407 "adrfam": "IPv4", 00:14:07.407 "traddr": "192.168.100.8", 00:14:07.407 "trsvcid": "4420" 00:14:07.408 }, 00:14:07.408 "peer_address": { 00:14:07.408 "trtype": "RDMA", 00:14:07.408 "adrfam": "IPv4", 00:14:07.408 "traddr": "192.168.100.8", 00:14:07.408 "trsvcid": "60943" 00:14:07.408 }, 00:14:07.408 "auth": { 00:14:07.408 "state": "completed", 00:14:07.408 "digest": "sha384", 00:14:07.408 "dhgroup": "null" 00:14:07.408 } 00:14:07.408 } 00:14:07.408 ]' 00:14:07.408 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:07.408 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
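
The digest/dhgroup/state assertions above (auth.sh@45-47) all run against the same nvmf_subsystem_get_qpairs output. A sketch of the check for the sha384/null iteration in progress here, assuming the qpairs JSON has been captured into a variable as the test does:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

In the qpair records, listen_address is the target's static listener (port 4420 throughout), while peer_address carries the initiator's ephemeral RDMA source port, which is why its trsvcid differs on every connection (47328, 49412, 60943, ...).
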
00:14:07.408 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:07.408 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:14:07.408 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:07.408 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.408 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.408 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.665 00:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:14:09.035 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.035 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.035 00:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.035 00:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.035 00:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.035 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:09.035 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:09.035 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.293 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:09.293 
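
Besides the SPDK host application, every iteration re-runs the handshake with the kernel initiator through nvme-cli, as in the connect lines above. Per the NVMe DH-HMAC-CHAP secret representation, the secrets take the form DHHC-1:t:base64:, where t=00 marks an untransformed secret and 01/02/03 mark secrets transformed with SHA-256/384/512; that matches key0 through key3 carrying the prefixes DHHC-1:00: through DHHC-1:03: throughout this log. A sketch of that leg, with the secret shortened here:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:01:...'   # full secret as echoed in the trace
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
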
00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:09.858 00:14:09.858 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:09.858 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:09.858 00:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.858 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.858 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.858 00:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.858 00:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.858 00:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.858 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:09.858 { 00:14:09.858 "cntlid": 51, 00:14:09.858 "qid": 0, 00:14:09.858 "state": "enabled", 00:14:09.858 "listen_address": { 00:14:09.858 "trtype": "RDMA", 00:14:09.858 "adrfam": "IPv4", 00:14:09.858 "traddr": "192.168.100.8", 00:14:09.858 "trsvcid": "4420" 00:14:09.858 }, 00:14:09.858 "peer_address": { 00:14:09.858 "trtype": "RDMA", 00:14:09.858 "adrfam": "IPv4", 00:14:09.858 "traddr": "192.168.100.8", 00:14:09.858 "trsvcid": "60674" 00:14:09.858 }, 00:14:09.858 "auth": { 00:14:09.858 "state": "completed", 00:14:09.858 "digest": "sha384", 00:14:09.858 "dhgroup": "null" 00:14:09.858 } 00:14:09.858 } 00:14:09.858 ]' 00:14:09.858 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:10.128 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.128 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:10.128 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:14:10.128 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:10.128 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.128 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.128 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.386 00:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:14:11.758 00:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.758 00:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.758 00:00:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.758 00:00:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.758 00:00:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.758 00:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:11.758 00:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:11.758 00:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:12.016 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:12.274 00:14:12.274 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:12.274 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:12.274 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.531 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.531 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.531 00:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.531 00:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.531 00:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.531 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 
00:14:12.531 { 00:14:12.531 "cntlid": 53, 00:14:12.531 "qid": 0, 00:14:12.531 "state": "enabled", 00:14:12.531 "listen_address": { 00:14:12.531 "trtype": "RDMA", 00:14:12.531 "adrfam": "IPv4", 00:14:12.531 "traddr": "192.168.100.8", 00:14:12.531 "trsvcid": "4420" 00:14:12.531 }, 00:14:12.531 "peer_address": { 00:14:12.531 "trtype": "RDMA", 00:14:12.531 "adrfam": "IPv4", 00:14:12.531 "traddr": "192.168.100.8", 00:14:12.531 "trsvcid": "49501" 00:14:12.531 }, 00:14:12.531 "auth": { 00:14:12.531 "state": "completed", 00:14:12.531 "digest": "sha384", 00:14:12.531 "dhgroup": "null" 00:14:12.531 } 00:14:12.531 } 00:14:12.531 ]' 00:14:12.531 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:12.531 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.531 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:12.796 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:14:12.796 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:12.796 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.796 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.796 00:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.062 00:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:14:14.433 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.433 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.433 00:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.433 00:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.433 00:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.433 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:14.433 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:14.433 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.689 00:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:15.253 00:14:15.253 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:15.253 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.253 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:15.253 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.253 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.253 00:00:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.253 00:00:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:15.510 { 00:14:15.510 "cntlid": 55, 00:14:15.510 "qid": 0, 00:14:15.510 "state": "enabled", 00:14:15.510 "listen_address": { 00:14:15.510 "trtype": "RDMA", 00:14:15.510 "adrfam": "IPv4", 00:14:15.510 "traddr": "192.168.100.8", 00:14:15.510 "trsvcid": "4420" 00:14:15.510 }, 00:14:15.510 "peer_address": { 00:14:15.510 "trtype": "RDMA", 00:14:15.510 "adrfam": "IPv4", 00:14:15.510 "traddr": "192.168.100.8", 00:14:15.510 "trsvcid": "56640" 00:14:15.510 }, 00:14:15.510 "auth": { 00:14:15.510 "state": "completed", 00:14:15.510 "digest": "sha384", 00:14:15.510 "dhgroup": "null" 00:14:15.510 } 00:14:15.510 } 00:14:15.510 ]' 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:14:15.510 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.767 00:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:14:17.137 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.137 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.137 00:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.137 00:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.137 00:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.137 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:14:17.137 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:17.137 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:17.137 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:17.395 00:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:18.003 00:14:18.003 00:00:47 
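
The trace has just advanced the middle loop from dhgroup null to ffdhe2048 (auth.sh@85) while staying on sha384. The control flow driving these repetitions is a plain triple nesting over the digests, dhgroups and keys arrays echoed in the for-lines above; a sketch of that structure, with array contents inferred from what this run exercises (sha512 and the middle ffdhe groups are assumptions at this point in the log):

    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # keys: array of DH-HMAC-CHAP secrets, defined elsewhere in auth.sh
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # auth.sh@87
                connect_authenticate "$digest" "$dhgroup" "$keyid"            # auth.sh@89
            done
        done
    done
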
nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:18.003 { 00:14:18.003 "cntlid": 57, 00:14:18.003 "qid": 0, 00:14:18.003 "state": "enabled", 00:14:18.003 "listen_address": { 00:14:18.003 "trtype": "RDMA", 00:14:18.003 "adrfam": "IPv4", 00:14:18.003 "traddr": "192.168.100.8", 00:14:18.003 "trsvcid": "4420" 00:14:18.003 }, 00:14:18.003 "peer_address": { 00:14:18.003 "trtype": "RDMA", 00:14:18.003 "adrfam": "IPv4", 00:14:18.003 "traddr": "192.168.100.8", 00:14:18.003 "trsvcid": "56294" 00:14:18.003 }, 00:14:18.003 "auth": { 00:14:18.003 "state": "completed", 00:14:18.003 "digest": "sha384", 00:14:18.003 "dhgroup": "ffdhe2048" 00:14:18.003 } 00:14:18.003 } 00:14:18.003 ]' 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:18.003 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:18.259 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:18.259 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:18.259 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.259 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.259 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.516 00:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:14:19.888 00:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.888 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.888 00:00:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.888 00:00:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.888 00:00:49 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.888 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:19.888 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:19.888 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:20.145 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:14:20.710 00:14:20.710 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:20.710 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:20.710 00:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.710 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.710 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.710 00:00:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.710 00:00:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.967 00:00:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.967 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:20.967 { 00:14:20.967 "cntlid": 59, 00:14:20.967 "qid": 0, 00:14:20.967 "state": "enabled", 00:14:20.967 "listen_address": { 00:14:20.967 "trtype": "RDMA", 00:14:20.967 "adrfam": "IPv4", 00:14:20.967 "traddr": "192.168.100.8", 00:14:20.967 "trsvcid": "4420" 00:14:20.967 }, 00:14:20.967 "peer_address": { 00:14:20.967 
"trtype": "RDMA", 00:14:20.967 "adrfam": "IPv4", 00:14:20.967 "traddr": "192.168.100.8", 00:14:20.967 "trsvcid": "48592" 00:14:20.967 }, 00:14:20.967 "auth": { 00:14:20.967 "state": "completed", 00:14:20.967 "digest": "sha384", 00:14:20.967 "dhgroup": "ffdhe2048" 00:14:20.967 } 00:14:20.967 } 00:14:20.967 ]' 00:14:20.967 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:20.967 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:20.967 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:20.967 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:20.967 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:20.967 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.968 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.968 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.225 00:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:14:22.595 00:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.595 00:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.595 00:00:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.595 00:00:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.595 00:00:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.595 00:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:22.595 00:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:22.595 00:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:22.853 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:23.418 00:14:23.418 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:23.418 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:23.418 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:23.675 { 00:14:23.675 "cntlid": 61, 00:14:23.675 "qid": 0, 00:14:23.675 "state": "enabled", 00:14:23.675 "listen_address": { 00:14:23.675 "trtype": "RDMA", 00:14:23.675 "adrfam": "IPv4", 00:14:23.675 "traddr": "192.168.100.8", 00:14:23.675 "trsvcid": "4420" 00:14:23.675 }, 00:14:23.675 "peer_address": { 00:14:23.675 "trtype": "RDMA", 00:14:23.675 "adrfam": "IPv4", 00:14:23.675 "traddr": "192.168.100.8", 00:14:23.675 "trsvcid": "37303" 00:14:23.675 }, 00:14:23.675 "auth": { 00:14:23.675 "state": "completed", 00:14:23.675 "digest": "sha384", 00:14:23.675 "dhgroup": "ffdhe2048" 00:14:23.675 } 00:14:23.675 } 00:14:23.675 ]' 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.675 00:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.947 00:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme 
connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:14:25.321 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.578 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.578 00:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.578 00:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.579 00:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.579 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:25.579 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.579 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.835 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:14:25.835 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:25.835 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:25.835 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:25.835 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:25.835 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:25.835 00:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.835 00:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.835 00:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.836 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:25.836 00:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.092 00:14:26.092 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:26.092 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:26.092 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.349 00:00:55 
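
Every hostrpc invocation in this trace expands, at target/auth.sh@31, into the same rpc.py call against the second application's socket; the wrapper is effectively the one-liner below (the spdk checkout path is the workspace path shown in the log):

    hostrpc() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

This is what separates the two sides throughout the run: bare rpc_cmd addresses the nvmf target, while hostrpc drives the separate host process that owns the nvme0 controller.
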
nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:26.349 { 00:14:26.349 "cntlid": 63, 00:14:26.349 "qid": 0, 00:14:26.349 "state": "enabled", 00:14:26.349 "listen_address": { 00:14:26.349 "trtype": "RDMA", 00:14:26.349 "adrfam": "IPv4", 00:14:26.349 "traddr": "192.168.100.8", 00:14:26.349 "trsvcid": "4420" 00:14:26.349 }, 00:14:26.349 "peer_address": { 00:14:26.349 "trtype": "RDMA", 00:14:26.349 "adrfam": "IPv4", 00:14:26.349 "traddr": "192.168.100.8", 00:14:26.349 "trsvcid": "49922" 00:14:26.349 }, 00:14:26.349 "auth": { 00:14:26.349 "state": "completed", 00:14:26.349 "digest": "sha384", 00:14:26.349 "dhgroup": "ffdhe2048" 00:14:26.349 } 00:14:26.349 } 00:14:26.349 ]' 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.349 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:26.606 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.606 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.606 00:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.863 00:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:14:28.234 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.234 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.234 00:00:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.234 00:00:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.234 00:00:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.234 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.234 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:28.234 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:28.234 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:28.492 00:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:29.057 00:14:29.057 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:14:29.057 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.057 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:29.320 { 00:14:29.320 "cntlid": 65, 00:14:29.320 "qid": 0, 00:14:29.320 "state": "enabled", 00:14:29.320 "listen_address": { 00:14:29.320 "trtype": "RDMA", 00:14:29.320 "adrfam": "IPv4", 00:14:29.320 "traddr": "192.168.100.8", 00:14:29.320 "trsvcid": "4420" 00:14:29.320 }, 00:14:29.320 "peer_address": { 00:14:29.320 "trtype": "RDMA", 00:14:29.320 "adrfam": "IPv4", 00:14:29.320 "traddr": "192.168.100.8", 00:14:29.320 "trsvcid": "33613" 00:14:29.320 }, 00:14:29.320 "auth": { 00:14:29.320 "state": "completed", 00:14:29.320 "digest": "sha384", 00:14:29.320 "dhgroup": "ffdhe3072" 
00:14:29.320 }
00:14:29.320 }
00:14:29.320 ]'
00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:29.320 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:29.581 00:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==:
00:14:30.951 00:00:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:30.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:30.951 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:30.951 00:01:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:30.951 00:01:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:30.951 00:01:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:30.951 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:14:30.951 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:30.951 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:14:31.209 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:14:31.774
00:14:31.774 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:31.774 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:31.774 00:01:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:32.031 {
00:14:32.031 "cntlid": 67,
00:14:32.031 "qid": 0,
00:14:32.031 "state": "enabled",
00:14:32.031 "listen_address": {
00:14:32.031 "trtype": "RDMA",
00:14:32.031 "adrfam": "IPv4",
00:14:32.031 "traddr": "192.168.100.8",
00:14:32.031 "trsvcid": "4420"
00:14:32.031 },
00:14:32.031 "peer_address": {
00:14:32.031 "trtype": "RDMA",
00:14:32.031 "adrfam": "IPv4",
00:14:32.031 "traddr": "192.168.100.8",
00:14:32.031 "trsvcid": "40147"
00:14:32.031 },
00:14:32.031 "auth": {
00:14:32.031 "state": "completed",
00:14:32.031 "digest": "sha384",
00:14:32.031 "dhgroup": "ffdhe3072"
00:14:32.031 }
00:14:32.031 }
00:14:32.031 ]'
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:32.031 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:32.315 00:01:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli:
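The xtrace above (target/auth.sh@34 through @54) is one full pass of the test's connect_authenticate helper for key1 with sha384/ffdhe3072. A minimal sketch of what the helper appears to do, reconstructed only from the trace; the real target/auth.sh may differ, and $subnqn, $hostnqn, $hostid and the keys array stand in for values that are hard-coded in the log:

    connect_authenticate() {
        local digest dhgroup key qpairs
        digest="$1" dhgroup="$2" key="key$3"
        # Register the host NQN on the subsystem with the DH-HMAC-CHAP key under test.
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
        # SPDK host path: attach a bdev_nvme controller, authenticating with that key.
        hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        # Ask the target what the qpair actually negotiated and verify all three fields.
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
        hostrpc bdev_nvme_detach_controller nvme0
        # Kernel host path: repeat the handshake with nvme-cli, then clean up.
        nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
            --hostid "$hostid" --dhchap-secret "${keys[$3]}"
        nvme disconnect -n "$subnqn"
        rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    }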
00:14:33.687 00:01:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:33.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:33.687 00:01:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:33.687 00:01:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:33.687 00:01:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:33.687 00:01:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:33.687 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:14:33.687 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:33.687 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:14:33.944 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:14:34.508
00:14:34.508 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:34.508 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:34.508 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:34.766 {
00:14:34.766 "cntlid": 69,
00:14:34.766 "qid": 0,
00:14:34.766 "state": "enabled",
00:14:34.766 "listen_address": {
00:14:34.766 "trtype": "RDMA",
00:14:34.766 "adrfam": "IPv4",
00:14:34.766 "traddr": "192.168.100.8",
00:14:34.766 "trsvcid": "4420"
00:14:34.766 },
00:14:34.766 "peer_address": {
00:14:34.766 "trtype": "RDMA",
00:14:34.766 "adrfam": "IPv4",
00:14:34.766 "traddr": "192.168.100.8",
00:14:34.766 "trsvcid": "42336"
00:14:34.766 },
00:14:34.766 "auth": {
00:14:34.766 "state": "completed",
00:14:34.766 "digest": "sha384",
00:14:34.766 "dhgroup": "ffdhe3072"
00:14:34.766 }
00:14:34.766 }
00:14:34.766 ]'
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:34.766 00:01:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:34.766 00:01:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:34.766 00:01:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:34.766 00:01:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:35.023 00:01:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==:
00:14:36.394 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:36.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:36.394 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:36.394 00:01:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:36.394 00:01:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:36.394 00:01:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
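Every `nvme connect` in this section passes a --dhchap-secret of the form DHHC-1:<t>:<base64>:, and the <t> field tracks the key index (key0 uses 00, key1 uses 01, and so on). In the DH-HMAC-CHAP secret representation used by nvme-cli, that second field appears to name the hash with which the secret was transformed (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the key material with a short CRC appended by the generator; treat those details as hedged readings of the format rather than something this log states. A small sketch for pulling a secret apart (inspect_dhchap_secret is a hypothetical helper, not part of the test):

    inspect_dhchap_secret() {
        local secret=$1 transform payload
        transform=$(cut -d: -f2 <<< "$secret")   # 00/01/02/03: transform hash, per the reading above
        payload=$(cut -d: -f3 <<< "$secret")
        # Decoded length = key material plus the trailing CRC bytes.
        echo "transform=$transform decoded_bytes=$(base64 -d <<< "$payload" | wc -c)"
    }
    inspect_dhchap_secret 'DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==:'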
00:14:36.394 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:14:36.394 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:36.394 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:36.651 00:01:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:37.225
00:14:37.225 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:37.225 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:37.225 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:37.482 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:37.482 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:37.482 00:01:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:37.482 00:01:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:37.482 00:01:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:37.482 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:37.482 {
00:14:37.482 "cntlid": 71,
00:14:37.482 "qid": 0,
00:14:37.482 "state": "enabled",
00:14:37.482 "listen_address": {
00:14:37.482 "trtype": "RDMA",
00:14:37.482 "adrfam": "IPv4",
00:14:37.482 "traddr": "192.168.100.8",
00:14:37.482 "trsvcid": "4420"
00:14:37.482 },
00:14:37.482 "peer_address": {
00:14:37.483 "trtype": "RDMA",
00:14:37.483 "adrfam": "IPv4",
00:14:37.483 "traddr": "192.168.100.8",
00:14:37.483 "trsvcid": "46848"
00:14:37.483 },
00:14:37.483 "auth": {
00:14:37.483 "state": "completed",
00:14:37.483 "digest": "sha384",
00:14:37.483 "dhgroup": "ffdhe3072"
00:14:37.483 }
00:14:37.483 }
00:14:37.483 ]'
00:14:37.483 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:37.483 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:37.483 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:37.483 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:37.483 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:37.483 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:37.483 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:37.483 00:01:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:37.740 00:01:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=:
00:14:39.112 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:39.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:39.369 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:39.369 00:01:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:39.369 00:01:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:39.369 00:01:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
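With ffdhe3072 now exercised for all four keys, the trace advances the outer loop (target/auth.sh@85) to ffdhe4096. The shape of the driver implied by the @85-@89 trace lines, as a sketch; the array contents are inferred from this section of the log only, and earlier sections of the same job presumably ran other digests:

    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # as seen in this section
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Pin the host to a single digest/dhgroup so the values verified by
            # connect_authenticate can only be the ones configured here.
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done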
00:14:39.369 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:14:39.369 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:14:39.369 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:39.369 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:14:39.626 00:01:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:14:39.884
00:14:39.884 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:39.884 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:39.884 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:40.142 {
00:14:40.142 "cntlid": 73,
00:14:40.142 "qid": 0,
00:14:40.142 "state": "enabled",
00:14:40.142 "listen_address": {
00:14:40.142 "trtype": "RDMA",
00:14:40.142 "adrfam": "IPv4",
00:14:40.142 "traddr": "192.168.100.8",
00:14:40.142 "trsvcid": "4420"
00:14:40.142 },
00:14:40.142 "peer_address": {
00:14:40.142 "trtype": "RDMA",
00:14:40.142 "adrfam": "IPv4",
00:14:40.142 "traddr": "192.168.100.8",
00:14:40.142 "trsvcid": "48189"
00:14:40.142 },
00:14:40.142 "auth": {
00:14:40.142 "state": "completed",
00:14:40.142 "digest": "sha384",
00:14:40.142 "dhgroup": "ffdhe4096"
00:14:40.142 }
00:14:40.142 }
00:14:40.142 ]'
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:40.142 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:40.399 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:40.399 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:40.399 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:40.657 00:01:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==:
00:14:42.027 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:42.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:42.027 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:42.027 00:01:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:42.027 00:01:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:42.027 00:01:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:42.027 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:14:42.027 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:42.027 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:14:42.284 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:14:42.848
00:14:42.848 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:42.848 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:42.848 00:01:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
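Two wrappers recur throughout this trace: hostrpc (target/auth.sh@31) and rpc_cmd, whose common/autotest_common.sh@559/@10/@587 lines are the xtrace_disable, set +x and return-code check visible around every RPC. A rough reconstruction, assuming a $rootdir pointing at the spdk checkout and an xtrace_restore counterpart; SPDK's real rpc_cmd is more elaborate (it multiplexes a persistent RPC session), so take this only as a reading of the trace:

    hostrpc() {
        # The host-side SPDK app listens on its own RPC socket, separate from the target's.
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
    rpc_cmd() {
        local rc
        xtrace_disable                 # quiet per-line tracing for the duration of the RPC
        "$rootdir/scripts/rpc.py" "$@"
        rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                 # traces as '[[ 0 == 0 ]]' on success
    }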
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:43.106 {
00:14:43.106 "cntlid": 75,
00:14:43.106 "qid": 0,
00:14:43.106 "state": "enabled",
00:14:43.106 "listen_address": {
00:14:43.106 "trtype": "RDMA",
00:14:43.106 "adrfam": "IPv4",
00:14:43.106 "traddr": "192.168.100.8",
00:14:43.106 "trsvcid": "4420"
00:14:43.106 },
00:14:43.106 "peer_address": {
00:14:43.106 "trtype": "RDMA",
00:14:43.106 "adrfam": "IPv4",
00:14:43.106 "traddr": "192.168.100.8",
00:14:43.106 "trsvcid": "34870"
00:14:43.106 },
00:14:43.106 "auth": {
00:14:43.106 "state": "completed",
00:14:43.106 "digest": "sha384",
00:14:43.106 "dhgroup": "ffdhe4096"
00:14:43.106 }
00:14:43.106 }
00:14:43.106 ]'
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:43.106 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:43.363 00:01:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli:
00:14:44.733 00:01:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:44.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:44.733 00:01:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:44.733 00:01:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:44.733 00:01:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:44.733 00:01:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:44.733 00:01:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:14:44.733 00:01:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:44.733 00:01:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:14:44.991 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:14:45.557
00:14:45.557 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:45.557 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:45.557 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:45.814 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:45.814 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:45.814 00:01:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:45.814 00:01:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:45.814 00:01:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:45.814 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:45.814 {
00:14:45.814 "cntlid": 77,
00:14:45.814 "qid": 0,
00:14:45.814 "state": "enabled",
00:14:45.814 "listen_address": {
00:14:45.814 "trtype": "RDMA",
00:14:45.814 "adrfam": "IPv4",
00:14:45.814 "traddr": "192.168.100.8",
00:14:45.814 "trsvcid": "4420"
00:14:45.814 },
00:14:45.814 "peer_address": {
00:14:45.814 "trtype": "RDMA",
00:14:45.814 "adrfam": "IPv4",
00:14:45.814 "traddr": "192.168.100.8",
00:14:45.814 "trsvcid": "46300"
00:14:45.814 },
00:14:45.814 "auth": {
00:14:45.814 "state": "completed",
00:14:45.814 "digest": "sha384",
00:14:45.814 "dhgroup": "ffdhe4096"
00:14:45.814 }
00:14:45.814 }
00:14:45.814 ]'
00:14:45.814 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:45.814 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:45.814 00:01:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:45.814 00:01:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:45.814 00:01:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:45.814 00:01:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:45.814 00:01:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:45.814 00:01:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:46.072 00:01:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==:
00:14:47.508 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:47.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:47.508 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:47.508 00:01:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:47.508 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:14:47.508 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:47.509 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:47.766 00:01:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:48.023
00:14:48.023 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:48.023 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:48.023 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:48.280 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:48.280 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:48.280 00:01:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:48.280 00:01:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:48.280 00:01:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:48.280 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:48.280 {
00:14:48.280 "cntlid": 79,
00:14:48.280 "qid": 0,
00:14:48.280 "state": "enabled",
00:14:48.280 "listen_address": {
00:14:48.280 "trtype": "RDMA",
00:14:48.280 "adrfam": "IPv4",
00:14:48.280 "traddr": "192.168.100.8",
00:14:48.280 "trsvcid": "4420"
00:14:48.280 },
00:14:48.280 "peer_address": {
00:14:48.280 "trtype": "RDMA",
00:14:48.280 "adrfam": "IPv4",
00:14:48.280 "traddr": "192.168.100.8",
00:14:48.280 "trsvcid": "55250"
00:14:48.280 },
00:14:48.280 "auth": {
00:14:48.280 "state": "completed",
00:14:48.280 "digest": "sha384",
00:14:48.280 "dhgroup": "ffdhe4096"
00:14:48.280 }
00:14:48.280 }
00:14:48.280 ]'
00:14:48.537 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:48.537 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:48.537 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:48.537 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:48.537 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:48.537 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:48.537 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:48.537 00:01:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:48.794 00:01:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=:
00:14:50.163 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:50.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:50.163 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:50.163 00:01:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:50.163 00:01:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:50.163 00:01:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
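A note on reading the verification lines throughout this section: expressions such as `[[ sha384 == \s\h\a\3\8\4 ]]` are not corruption. Inside `[[ ]]`, bash's xtrace renders a quoted right-hand side of == as a backslash-escaped string to show it is being matched literally rather than as a glob pattern. Stripped of the trace noise, each check is simply:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]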
00:14:50.164 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:14:50.164 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:14:50.164 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:50.164 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:14:50.421 00:01:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:14:50.985
00:14:50.985 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:50.985 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:50.985 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:51.243 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:51.243 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:51.243 00:01:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:51.243 00:01:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:51.243 00:01:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:51.243 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:51.243 {
00:14:51.243 "cntlid": 81,
00:14:51.243 "qid": 0,
00:14:51.243 "state": "enabled",
00:14:51.243 "listen_address": {
00:14:51.243 "trtype": "RDMA",
00:14:51.243 "adrfam": "IPv4",
00:14:51.243 "traddr": "192.168.100.8",
00:14:51.243 "trsvcid": "4420"
00:14:51.243 },
00:14:51.243 "peer_address": {
00:14:51.243 "trtype": "RDMA",
00:14:51.243 "adrfam": "IPv4",
00:14:51.243 "traddr": "192.168.100.8",
00:14:51.243 "trsvcid": "41480"
00:14:51.243 },
00:14:51.243 "auth": {
00:14:51.243 "state": "completed",
00:14:51.243 "digest": "sha384",
00:14:51.243 "dhgroup": "ffdhe6144"
00:14:51.243 }
00:14:51.243 }
00:14:51.243 ]'
00:14:51.243 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:51.243 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:51.243 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:51.499 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:51.499 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:51.499 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:51.499 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:51.499 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:51.757 00:01:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==:
00:14:53.133 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:53.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:53.133 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:53.133 00:01:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:53.133 00:01:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:53.133 00:01:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:53.133 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:14:53.133 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:53.133 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:14:53.391 00:01:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:14:53.954
00:14:53.954 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:53.954 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:53.954 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:54.212 {
00:14:54.212 "cntlid": 83,
00:14:54.212 "qid": 0,
00:14:54.212 "state": "enabled",
00:14:54.212 "listen_address": {
00:14:54.212 "trtype": "RDMA",
00:14:54.212 "adrfam": "IPv4",
00:14:54.212 "traddr": "192.168.100.8",
00:14:54.212 "trsvcid": "4420"
00:14:54.212 },
00:14:54.212 "peer_address": {
00:14:54.212 "trtype": "RDMA",
00:14:54.212 "adrfam": "IPv4",
00:14:54.212 "traddr": "192.168.100.8",
00:14:54.212 "trsvcid": "42822"
00:14:54.212 },
00:14:54.212 "auth": {
00:14:54.212 "state": "completed",
00:14:54.212 "digest": "sha384",
00:14:54.212 "dhgroup": "ffdhe6144"
00:14:54.212 }
00:14:54.212 }
00:14:54.212 ]'
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:54.212 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:54.469 00:01:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli:
00:14:55.840 00:01:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:55.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:55.840 00:01:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:55.840 00:01:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
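Every nvmf_subsystem_get_qpairs dump in this section has the same anatomy: listen_address is the target's static NVMe-oF listener (port 4420), peer_address shows the host's ephemeral source port (40147, 42336, 46848, ... different on every connection), and cntlid advances by two per iteration (67, 69, ..., 87), which is consistent with each iteration creating two controllers, the bdev_nvme attach and the raw nvme connect. Only the auth object varies with the test parameters. Pulling just the negotiated-auth fields out of such a dump, with a jq filter written for this log's shape:

    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
        jq -r '.[] | "cntlid=\(.cntlid) digest=\(.auth.digest) dhgroup=\(.auth.dhgroup) state=\(.auth.state)"'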
00:14:56.978 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.978 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.978 00:01:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.978 00:01:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.978 00:01:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.978 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:14:56.978 { 00:14:56.978 "cntlid": 85, 00:14:56.978 "qid": 0, 00:14:56.978 "state": "enabled", 00:14:56.978 "listen_address": { 00:14:56.978 "trtype": "RDMA", 00:14:56.978 "adrfam": "IPv4", 00:14:56.978 "traddr": "192.168.100.8", 00:14:56.978 "trsvcid": "4420" 00:14:56.978 }, 00:14:56.978 "peer_address": { 00:14:56.978 "trtype": "RDMA", 00:14:56.978 "adrfam": "IPv4", 00:14:56.978 "traddr": "192.168.100.8", 00:14:56.978 "trsvcid": "50081" 00:14:56.978 }, 00:14:56.978 "auth": { 00:14:56.978 "state": "completed", 00:14:56.978 "digest": "sha384", 00:14:56.978 "dhgroup": "ffdhe6144" 00:14:56.978 } 00:14:56.978 } 00:14:56.978 ]' 00:14:56.978 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:14:57.235 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.235 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:14:57.235 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:57.235 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:14:57.235 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.235 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.235 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.492 00:01:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:14:58.860 00:01:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.860 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.860 00:01:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.860 00:01:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.860 00:01:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.860 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:14:58.860 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:58.860 00:01:28 
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:59.117 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:59.681
00:14:59.681 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:14:59.681 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:14:59.681 00:01:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:59.938 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:59.938 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:59.938 00:01:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:59.938 00:01:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:59.938 00:01:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:59.938 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:14:59.938 {
00:14:59.938 "cntlid": 87,
00:14:59.938 "qid": 0,
00:14:59.938 "state": "enabled",
00:14:59.938 "listen_address": {
00:14:59.938 "trtype": "RDMA",
00:14:59.938 "adrfam": "IPv4",
00:14:59.938 "traddr": "192.168.100.8",
00:14:59.938 "trsvcid": "4420"
00:14:59.938 },
00:14:59.938 "peer_address": {
00:14:59.938 "trtype": "RDMA",
00:14:59.938 "adrfam": "IPv4",
00:14:59.938 "traddr": "192.168.100.8",
00:14:59.938 "trsvcid": "48005"
00:14:59.938 },
00:14:59.938 "auth": {
00:14:59.938 "state": "completed",
00:14:59.938 "digest": "sha384",
00:14:59.938 "dhgroup": "ffdhe6144"
00:14:59.938 }
00:14:59.938 }
00:14:59.938 ]'
00:14:59.938 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:14:59.938 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:59.938 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:00.195 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:00.195 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:00.195 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:00.195 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:00.195 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:00.452 00:01:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=:
00:15:01.852 00:01:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:01.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:01.852 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:01.852 00:01:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:01.852 00:01:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.852 00:01:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:01.852 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:15:01.852 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:01.852 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:01.852 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:02.109 00:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:03.039
00:15:03.039 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:03.039 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:03.039 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:03.296 {
00:15:03.296 "cntlid": 89,
00:15:03.296 "qid": 0,
00:15:03.296 "state": "enabled",
00:15:03.296 "listen_address": {
00:15:03.296 "trtype": "RDMA",
00:15:03.296 "adrfam": "IPv4",
00:15:03.296 "traddr": "192.168.100.8",
00:15:03.296 "trsvcid": "4420"
00:15:03.296 },
00:15:03.296 "peer_address": {
00:15:03.296 "trtype": "RDMA",
00:15:03.296 "adrfam": "IPv4",
00:15:03.296 "traddr": "192.168.100.8",
00:15:03.296 "trsvcid": "37621"
00:15:03.296 },
00:15:03.296 "auth": {
00:15:03.296 "state": "completed",
00:15:03.296 "digest": "sha384",
00:15:03.296 "dhgroup": "ffdhe8192"
00:15:03.296 }
00:15:03.296 }
00:15:03.296 ]'
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:03.296 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:03.553 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:03.553 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:03.553 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:03.810 00:01:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==:
00:15:05.180 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:05.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:05.180 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:05.180 00:01:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:05.180 00:01:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.180 00:01:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:05.180 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:05.180 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:05.180 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:15:05.438 00:01:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:15:06.371
00:15:06.371 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:06.371 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:06.371 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:06.629 {
00:15:06.629 "cntlid": 91,
00:15:06.629 "qid": 0,
00:15:06.629 "state": "enabled",
00:15:06.629 "listen_address": {
00:15:06.629 "trtype": "RDMA",
00:15:06.629 "adrfam": "IPv4",
00:15:06.629 "traddr": "192.168.100.8",
00:15:06.629 "trsvcid": "4420"
00:15:06.629 },
00:15:06.629 "peer_address": {
00:15:06.629 "trtype": "RDMA",
00:15:06.629 "adrfam": "IPv4",
00:15:06.629 "traddr": "192.168.100.8",
00:15:06.629 "trsvcid": "49640"
00:15:06.629 },
00:15:06.629 "auth": {
00:15:06.629 "state": "completed",
00:15:06.629 "digest": "sha384",
00:15:06.629 "dhgroup": "ffdhe8192"
00:15:06.629 }
00:15:06.629 }
00:15:06.629 ]'
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:06.629 00:01:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:06.887 00:01:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli:
00:15:08.259 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:08.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:08.517 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:08.517 00:01:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:08.517 00:01:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.517 00:01:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:08.517 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:08.517 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:08.517 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2
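Every connect_authenticate cycle in this run has the same shape: register the host NQN on the target with one of the four DH-HMAC-CHAP keys, constrain the host's bdev_nvme layer to the matching digest/dhgroup, attach, verify, detach. A condensed sketch of the cycle that starts here (key2 under sha384/ffdhe8192), assuming the target uses its default RPC socket while the host app listens on /var/tmp/host.sock as in this job:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # Target side: allow the host NQN and bind it to a DH-HMAC-CHAP key.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2
    # Host side: restrict negotiation to a single digest/dhgroup, then attach.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma \
        -f ipv4 -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key2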
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:15:08.774 00:01:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:15:09.708
00:15:09.708 00:01:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:09.708 00:01:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:09.708 00:01:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:09.966 {
00:15:09.966 "cntlid": 93,
00:15:09.966 "qid": 0,
00:15:09.966 "state": "enabled",
00:15:09.966 "listen_address": {
00:15:09.966 "trtype": "RDMA",
00:15:09.966 "adrfam": "IPv4",
00:15:09.966 "traddr": "192.168.100.8",
00:15:09.966 "trsvcid": "4420"
00:15:09.966 },
00:15:09.966 "peer_address": {
00:15:09.966 "trtype": "RDMA",
00:15:09.966 "adrfam": "IPv4",
00:15:09.966 "traddr": "192.168.100.8",
00:15:09.966 "trsvcid": "52966"
00:15:09.966 },
00:15:09.966 "auth": {
00:15:09.966 "state": "completed",
00:15:09.966 "digest": "sha384",
00:15:09.966 "dhgroup": "ffdhe8192"
00:15:09.966 }
00:15:09.966 }
00:15:09.966 ]'
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:09.966 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:10.223 00:01:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==:
00:15:11.594 00:01:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:11.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:11.594 00:01:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:11.594 00:01:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:11.594 00:01:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.594 00:01:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:11.594 00:01:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:11.594 00:01:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:11.594 00:01:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:11.854 00:01:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:11.855 00:01:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:12.788
00:15:12.788 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:12.788 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:12.788 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:13.045 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:13.045 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:13.045 00:01:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.045 00:01:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.045 00:01:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.045 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:13.045 {
00:15:13.045 "cntlid": 95,
00:15:13.045 "qid": 0,
00:15:13.045 "state": "enabled",
00:15:13.045 "listen_address": {
00:15:13.045 "trtype": "RDMA",
00:15:13.045 "adrfam": "IPv4",
00:15:13.045 "traddr": "192.168.100.8",
00:15:13.045 "trsvcid": "4420"
00:15:13.045 },
00:15:13.045 "peer_address": {
00:15:13.045 "trtype": "RDMA",
00:15:13.045 "adrfam": "IPv4",
00:15:13.045 "traddr": "192.168.100.8",
00:15:13.045 "trsvcid": "47881"
00:15:13.045 },
00:15:13.045 "auth": {
00:15:13.045 "state": "completed",
00:15:13.045 "digest": "sha384",
00:15:13.045 "dhgroup": "ffdhe8192"
00:15:13.045 }
00:15:13.045 }
00:15:13.045 ]'
00:15:13.046 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:13.302 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:13.302 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:13.302 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:13.302 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:13.302 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:13.302 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:13.302 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:13.560 00:01:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=:
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:14.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
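Once the SPDK-host attach has been verified and detached, the same key is exercised from the kernel initiator: nvme-cli passes the DHHC-1 secret directly on the command line, and the cycle ends by disconnecting and deregistering the host, exactly as the entries above show. Sketched under the same assumptions, with the secret elided rather than copied from the log:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:03:<elided base64 key material>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Deregister the host so the next loop iteration can re-add it with a new key.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"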
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}"
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:14.932 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:15.190 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:15.756
00:15:15.756 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:15.756 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:15.756 00:01:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:16.041 {
00:15:16.041 "cntlid": 97,
00:15:16.041 "qid": 0,
00:15:16.041 "state": "enabled",
00:15:16.041 "listen_address": {
00:15:16.041 "trtype": "RDMA",
00:15:16.041 "adrfam": "IPv4",
00:15:16.041 "traddr": "192.168.100.8",
00:15:16.041 "trsvcid": "4420"
00:15:16.041 },
00:15:16.041 "peer_address": {
00:15:16.041 "trtype": "RDMA",
00:15:16.041 "adrfam": "IPv4",
00:15:16.041 "traddr": "192.168.100.8",
00:15:16.041 "trsvcid": "47851"
00:15:16.041 },
00:15:16.041 "auth": {
00:15:16.041 "state": "completed",
00:15:16.041 "digest": "sha512",
00:15:16.041 "dhgroup": "null"
00:15:16.041 }
00:15:16.041 }
00:15:16.041 ]'
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:16.041 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:16.303 00:01:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==:
00:15:17.675 00:01:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:17.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:17.675 00:01:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:17.675 00:01:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:17.675 00:01:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.675 00:01:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:17.675 00:01:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:17.675 00:01:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:17.676 00:01:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:15:17.935 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:15:18.501
00:15:18.501 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:18.501 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:18.501 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:18.760 {
00:15:18.760 "cntlid": 99,
00:15:18.760 "qid": 0,
00:15:18.760 "state": "enabled",
00:15:18.760 "listen_address": {
00:15:18.760 "trtype": "RDMA",
00:15:18.760 "adrfam": "IPv4",
00:15:18.760 "traddr": "192.168.100.8",
00:15:18.760 "trsvcid": "4420"
00:15:18.760 },
00:15:18.760 "peer_address": {
00:15:18.760 "trtype": "RDMA",
00:15:18.760 "adrfam": "IPv4",
00:15:18.760 "traddr": "192.168.100.8",
00:15:18.760 "trsvcid": "56876"
00:15:18.760 },
00:15:18.760 "auth": {
00:15:18.760 "state": "completed",
00:15:18.760 "digest": "sha512",
00:15:18.760 "dhgroup": "null"
00:15:18.760 }
00:15:18.760 }
00:15:18.760 ]'
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:15:18.760 00:01:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:18.760 00:01:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:18.760 00:01:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:18.760 00:01:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:19.018 00:01:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli:
00:15:20.389 00:01:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:20.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:20.646 00:01:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:20.646 00:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:20.646 00:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.646 00:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:20.646 00:01:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:20.646 00:01:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:20.646 00:01:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:15:20.903 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:15:21.160
00:15:21.160 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:21.160 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:21.160 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:21.418 {
00:15:21.418 "cntlid": 101,
00:15:21.418 "qid": 0,
00:15:21.418 "state": "enabled",
00:15:21.418 "listen_address": {
00:15:21.418 "trtype": "RDMA",
00:15:21.418 "adrfam": "IPv4",
00:15:21.418 "traddr": "192.168.100.8",
00:15:21.418 "trsvcid": "4420"
00:15:21.418 },
00:15:21.418 "peer_address": {
00:15:21.418 "trtype": "RDMA",
00:15:21.418 "adrfam": "IPv4",
00:15:21.418 "traddr": "192.168.100.8",
00:15:21.418 "trsvcid": "46345"
00:15:21.418 },
00:15:21.418 "auth": {
00:15:21.418 "state": "completed",
00:15:21.418 "digest": "sha512",
00:15:21.418 "dhgroup": "null"
00:15:21.418 }
00:15:21.418 }
00:15:21.418 ]'
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:21.418 00:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:21.675 00:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==:
00:15:23.046 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:23.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:23.046 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:23.046 00:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:23.046 00:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:23.303 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:23.304 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:23.882
00:15:23.882 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:23.882 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:23.882 00:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:23.882 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:23.882 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:23.882 00:01:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:23.882 00:01:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.882 00:01:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:23.882 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:23.882 {
00:15:23.882 "cntlid": 103,
00:15:23.883 "qid": 0,
00:15:23.883 "state": "enabled",
00:15:23.883 "listen_address": {
00:15:23.883 "trtype": "RDMA",
00:15:23.883 "adrfam": "IPv4",
00:15:23.883 "traddr": "192.168.100.8",
00:15:23.883 "trsvcid": "4420"
00:15:23.883 },
00:15:23.883 "peer_address": {
00:15:23.883 "trtype": "RDMA",
00:15:23.883 "adrfam": "IPv4",
00:15:23.883 "traddr": "192.168.100.8",
00:15:23.883 "trsvcid": "46672"
00:15:23.883 },
00:15:23.883 "auth": {
00:15:23.883 "state": "completed",
00:15:23.883 "digest": "sha512",
00:15:23.883 "dhgroup": "null"
00:15:23.883 }
00:15:23.883 }
00:15:23.883 ]'
00:15:24.142 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:24.142 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:24.142 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:24.142 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:15:24.142 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:24.142 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:24.142 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:24.142 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:24.399 00:01:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=:
00:15:25.774 00:01:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:25.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:25.774 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:25.774 00:01:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:25.774 00:01:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.774 00:01:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:25.774 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:15:25.774 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:25.774 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:15:25.775 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0
00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:26.039 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:26.604 00:15:26.604 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:26.604 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:26.604 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.604 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.604 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.604 00:01:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.604 00:01:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.604 00:01:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.604 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:26.604 { 00:15:26.604 "cntlid": 105, 00:15:26.604 "qid": 0, 00:15:26.604 "state": "enabled", 00:15:26.604 "listen_address": { 00:15:26.604 "trtype": "RDMA", 00:15:26.604 "adrfam": "IPv4", 00:15:26.604 "traddr": "192.168.100.8", 00:15:26.604 "trsvcid": "4420" 00:15:26.604 }, 00:15:26.604 "peer_address": { 00:15:26.605 "trtype": "RDMA", 00:15:26.605 "adrfam": "IPv4", 00:15:26.605 "traddr": "192.168.100.8", 00:15:26.605 "trsvcid": "47449" 00:15:26.605 }, 00:15:26.605 "auth": { 00:15:26.605 "state": "completed", 00:15:26.605 "digest": "sha512", 00:15:26.605 "dhgroup": "ffdhe2048" 00:15:26.605 } 00:15:26.605 } 00:15:26.605 ]' 00:15:26.605 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:26.861 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.861 00:01:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:26.861 00:01:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:26.861 00:01:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:26.861 00:01:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.861 00:01:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.861 00:01:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.119 00:01:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:15:28.490 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.490 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.490 00:01:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.490 00:01:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.490 00:01:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.490 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:28.490 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:28.490 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:28.749 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:15:28.749 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:28.749 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:28.749 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:28.749 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:28.749 00:01:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:28.749 00:01:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.749 00:01:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.749 00:01:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.749 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:28.749 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:29.007 00:15:29.007 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:29.007 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:29.007 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.264 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.264 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.264 00:01:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.264 00:01:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.264 00:01:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.264 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:29.264 { 00:15:29.264 "cntlid": 107, 00:15:29.264 "qid": 0, 00:15:29.264 "state": "enabled", 00:15:29.264 "listen_address": { 00:15:29.264 "trtype": "RDMA", 00:15:29.264 "adrfam": "IPv4", 00:15:29.264 "traddr": "192.168.100.8", 00:15:29.264 "trsvcid": "4420" 00:15:29.264 }, 00:15:29.264 "peer_address": { 00:15:29.264 "trtype": "RDMA", 00:15:29.264 "adrfam": "IPv4", 00:15:29.264 "traddr": "192.168.100.8", 00:15:29.264 "trsvcid": "44241" 00:15:29.264 }, 00:15:29.264 "auth": { 00:15:29.264 "state": "completed", 00:15:29.264 "digest": "sha512", 00:15:29.264 "dhgroup": "ffdhe2048" 00:15:29.264 } 00:15:29.264 } 00:15:29.264 ]' 00:15:29.264 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:29.264 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:29.264 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:29.521 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:29.521 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:29.522 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.522 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.522 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.779 00:01:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:15:31.177 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.177 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.177 00:02:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.177 00:02:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.177 00:02:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.177 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:31.177 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:31.177 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:31.452 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:31.709 00:15:31.709 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:31.709 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:31.709 00:02:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.966 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.966 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.966 00:02:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.966 00:02:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.966 00:02:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.966 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:31.966 { 00:15:31.966 "cntlid": 109, 00:15:31.966 "qid": 0, 00:15:31.966 "state": "enabled", 00:15:31.966 "listen_address": { 00:15:31.966 "trtype": "RDMA", 00:15:31.966 "adrfam": "IPv4", 00:15:31.966 "traddr": "192.168.100.8", 00:15:31.966 "trsvcid": "4420" 00:15:31.966 }, 00:15:31.966 "peer_address": { 00:15:31.966 "trtype": "RDMA", 00:15:31.966 "adrfam": "IPv4", 00:15:31.966 "traddr": "192.168.100.8", 00:15:31.966 "trsvcid": "40291" 00:15:31.966 }, 00:15:31.966 "auth": { 00:15:31.966 "state": "completed", 00:15:31.966 "digest": "sha512", 00:15:31.966 "dhgroup": "ffdhe2048" 00:15:31.966 } 
00:15:31.966 } 00:15:31.966 ]' 00:15:31.966 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:31.966 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.966 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:32.223 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:32.223 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:32.223 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.223 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.223 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.480 00:02:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:15:33.849 00:02:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.849 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.849 00:02:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.849 00:02:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.849 00:02:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.849 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:33.849 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:33.849 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:34.118 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:15:34.119 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:34.119 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:34.119 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:34.119 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.119 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:34.119 00:02:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.119 00:02:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.119 00:02:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.119 00:02:03 
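
Each pass in this stretch of the log is one DH-HMAC-CHAP round trip driven by connect_authenticate in target/auth.sh: the host-side RPC server is pinned to a single digest/DH-group pair, the host NQN is registered on the subsystem with one of the four test keys, a controller is attached over RDMA with that key, and the resulting qpair is inspected before teardown. A minimal sketch of that sequence, using only the socket paths, NQNs, and flags visible above (the harness wrappers hostrpc/rpc_cmd, and the key objects key0..key3 loaded earlier in the test, are assumed and not reproduced):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    host_nqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host side (host.sock): restrict negotiation to one digest/dhgroup pair.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side (default SPDK socket): admit the host NQN with a given key.
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host_nqn" --dhchap-key key2

    # Attach over RDMA with the same key; authentication runs during CONNECT.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma \
        -f ipv4 -a 192.168.100.8 -s 4420 -q "$host_nqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
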
nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.119 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.380 00:15:34.380 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:34.380 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.380 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:34.638 { 00:15:34.638 "cntlid": 111, 00:15:34.638 "qid": 0, 00:15:34.638 "state": "enabled", 00:15:34.638 "listen_address": { 00:15:34.638 "trtype": "RDMA", 00:15:34.638 "adrfam": "IPv4", 00:15:34.638 "traddr": "192.168.100.8", 00:15:34.638 "trsvcid": "4420" 00:15:34.638 }, 00:15:34.638 "peer_address": { 00:15:34.638 "trtype": "RDMA", 00:15:34.638 "adrfam": "IPv4", 00:15:34.638 "traddr": "192.168.100.8", 00:15:34.638 "trsvcid": "51617" 00:15:34.638 }, 00:15:34.638 "auth": { 00:15:34.638 "state": "completed", 00:15:34.638 "digest": "sha512", 00:15:34.638 "dhgroup": "ffdhe2048" 00:15:34.638 } 00:15:34.638 } 00:15:34.638 ]' 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.638 00:02:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:34.895 00:02:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.895 00:02:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.895 00:02:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.152 00:02:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:15:36.084 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.342 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.342 00:02:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.342 00:02:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.342 00:02:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.342 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.342 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:36.342 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:36.342 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:36.602 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:15:36.602 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:36.602 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:36.602 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:36.602 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:36.602 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:36.602 00:02:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.602 00:02:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.602 00:02:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.603 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:36.603 00:02:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:36.862 00:15:37.118 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:37.118 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:37.118 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.375 
00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:37.375 { 00:15:37.375 "cntlid": 113, 00:15:37.375 "qid": 0, 00:15:37.375 "state": "enabled", 00:15:37.375 "listen_address": { 00:15:37.375 "trtype": "RDMA", 00:15:37.375 "adrfam": "IPv4", 00:15:37.375 "traddr": "192.168.100.8", 00:15:37.375 "trsvcid": "4420" 00:15:37.375 }, 00:15:37.375 "peer_address": { 00:15:37.375 "trtype": "RDMA", 00:15:37.375 "adrfam": "IPv4", 00:15:37.375 "traddr": "192.168.100.8", 00:15:37.375 "trsvcid": "52052" 00:15:37.375 }, 00:15:37.375 "auth": { 00:15:37.375 "state": "completed", 00:15:37.375 "digest": "sha512", 00:15:37.375 "dhgroup": "ffdhe3072" 00:15:37.375 } 00:15:37.375 } 00:15:37.375 ]' 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.375 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.632 00:02:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:15:39.001 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.001 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.001 00:02:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.001 00:02:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.001 00:02:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.001 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:39.001 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:39.001 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:39.258 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:15:39.258 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:39.258 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:39.259 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:39.259 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:39.259 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:39.259 00:02:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.259 00:02:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.259 00:02:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.259 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:39.259 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:39.823 00:15:39.823 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:39.823 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:39.823 00:02:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.079 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.079 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.079 00:02:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.079 00:02:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.079 00:02:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.079 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:40.079 { 00:15:40.079 "cntlid": 115, 00:15:40.079 "qid": 0, 00:15:40.079 "state": "enabled", 00:15:40.079 "listen_address": { 00:15:40.079 "trtype": "RDMA", 00:15:40.079 "adrfam": "IPv4", 00:15:40.079 "traddr": "192.168.100.8", 00:15:40.079 "trsvcid": "4420" 00:15:40.080 }, 00:15:40.080 "peer_address": { 00:15:40.080 "trtype": "RDMA", 00:15:40.080 "adrfam": "IPv4", 00:15:40.080 "traddr": "192.168.100.8", 00:15:40.080 "trsvcid": "35212" 00:15:40.080 }, 00:15:40.080 "auth": { 00:15:40.080 "state": "completed", 00:15:40.080 "digest": "sha512", 00:15:40.080 "dhgroup": "ffdhe3072" 00:15:40.080 } 00:15:40.080 } 00:15:40.080 ]' 00:15:40.080 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:40.080 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:15:40.080 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:40.080 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:40.080 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:40.080 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.080 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.080 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.337 00:02:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:15:41.706 00:02:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.706 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.706 00:02:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.706 00:02:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.706 00:02:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.706 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:41.706 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:41.706 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:15:42.269 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:42.527 00:15:42.527 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:42.527 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:42.527 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.784 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.784 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.784 00:02:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.784 00:02:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.784 00:02:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.784 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:42.784 { 00:15:42.784 "cntlid": 117, 00:15:42.784 "qid": 0, 00:15:42.784 "state": "enabled", 00:15:42.784 "listen_address": { 00:15:42.784 "trtype": "RDMA", 00:15:42.784 "adrfam": "IPv4", 00:15:42.784 "traddr": "192.168.100.8", 00:15:42.784 "trsvcid": "4420" 00:15:42.784 }, 00:15:42.784 "peer_address": { 00:15:42.784 "trtype": "RDMA", 00:15:42.784 "adrfam": "IPv4", 00:15:42.784 "traddr": "192.168.100.8", 00:15:42.784 "trsvcid": "55746" 00:15:42.784 }, 00:15:42.784 "auth": { 00:15:42.784 "state": "completed", 00:15:42.784 "digest": "sha512", 00:15:42.784 "dhgroup": "ffdhe3072" 00:15:42.784 } 00:15:42.784 } 00:15:42.784 ]' 00:15:42.784 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:42.784 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.784 00:02:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:42.784 00:02:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.784 00:02:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:42.784 00:02:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.784 00:02:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.784 00:02:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.042 00:02:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:15:44.431 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.431 00:02:13 
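
After each attach the harness confirms the connection actually authenticated with the parameters under test instead of silently falling back: it names the controller, dumps the subsystem's qpairs, and checks the auth fields. A sketch of that verification, with the jq paths taken from the checks above (ffdhe3072 matches the iteration shown here; the script itself writes the right-hand sides as glob-escaped patterns like \s\h\a\5\1\2, which xtrace prints but which compare the same way):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # The attached controller must be the one created above.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # The qpair must report the negotiated digest/dhgroup and a completed auth state.
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
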
nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.431 00:02:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.431 00:02:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.431 00:02:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.431 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:44.431 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.431 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.688 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:15:44.688 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:44.688 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:44.688 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:44.688 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:44.688 00:02:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:44.688 00:02:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.688 00:02:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.688 00:02:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.688 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:44.688 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.261 00:15:45.261 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:45.261 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:45.261 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.261 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.261 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.261 00:02:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.261 00:02:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.261 00:02:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.261 
00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:45.261 { 00:15:45.261 "cntlid": 119, 00:15:45.261 "qid": 0, 00:15:45.261 "state": "enabled", 00:15:45.261 "listen_address": { 00:15:45.261 "trtype": "RDMA", 00:15:45.261 "adrfam": "IPv4", 00:15:45.261 "traddr": "192.168.100.8", 00:15:45.261 "trsvcid": "4420" 00:15:45.261 }, 00:15:45.261 "peer_address": { 00:15:45.261 "trtype": "RDMA", 00:15:45.261 "adrfam": "IPv4", 00:15:45.261 "traddr": "192.168.100.8", 00:15:45.261 "trsvcid": "32930" 00:15:45.261 }, 00:15:45.261 "auth": { 00:15:45.261 "state": "completed", 00:15:45.261 "digest": "sha512", 00:15:45.261 "dhgroup": "ffdhe3072" 00:15:45.261 } 00:15:45.261 } 00:15:45.261 ]' 00:15:45.261 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:45.518 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.518 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:45.518 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.518 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:45.518 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.518 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.518 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.776 00:02:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:15:47.146 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.146 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.146 00:02:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.146 00:02:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.146 00:02:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.146 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.146 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:47.146 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:47.146 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 
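
The 'for dhgroup in "${dhgroups[@]}"' / 'for keyid in "${!keys[@]}"' lines mark the sweep structure: an outer loop over DH groups and an inner loop over the four keys, re-running connect_authenticate for every combination. In this excerpt the digest is sha512 throughout; ffdhe2048 and ffdhe3072 finished above, ffdhe4096 starts here, and ffdhe6144 follows further down. Roughly, limited to the values actually visible in this excerpt:

    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do   # keys[0..3], populated earlier in auth.sh
            $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"   # helper from target/auth.sh
        done
    done
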
00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:47.403 00:02:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:47.967 00:15:47.967 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:47.967 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:47.967 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:48.225 { 00:15:48.225 "cntlid": 121, 00:15:48.225 "qid": 0, 00:15:48.225 "state": "enabled", 00:15:48.225 "listen_address": { 00:15:48.225 "trtype": "RDMA", 00:15:48.225 "adrfam": "IPv4", 00:15:48.225 "traddr": "192.168.100.8", 00:15:48.225 "trsvcid": "4420" 00:15:48.225 }, 00:15:48.225 "peer_address": { 00:15:48.225 "trtype": "RDMA", 00:15:48.225 "adrfam": "IPv4", 00:15:48.225 "traddr": "192.168.100.8", 00:15:48.225 "trsvcid": "57218" 00:15:48.225 }, 00:15:48.225 "auth": { 00:15:48.225 "state": "completed", 00:15:48.225 "digest": "sha512", 00:15:48.225 "dhgroup": "ffdhe4096" 00:15:48.225 } 00:15:48.225 } 00:15:48.225 ]' 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.225 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.483 00:02:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:15:49.855 00:02:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.113 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.113 00:02:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.113 00:02:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.113 00:02:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.113 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:50.113 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.113 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:50.370 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:50.628 00:15:50.628 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:50.628 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.628 00:02:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:50.885 { 00:15:50.885 "cntlid": 123, 00:15:50.885 "qid": 0, 00:15:50.885 "state": "enabled", 00:15:50.885 "listen_address": { 00:15:50.885 "trtype": "RDMA", 00:15:50.885 "adrfam": "IPv4", 00:15:50.885 "traddr": "192.168.100.8", 00:15:50.885 "trsvcid": "4420" 00:15:50.885 }, 00:15:50.885 "peer_address": { 00:15:50.885 "trtype": "RDMA", 00:15:50.885 "adrfam": "IPv4", 00:15:50.885 "traddr": "192.168.100.8", 00:15:50.885 "trsvcid": "53469" 00:15:50.885 }, 00:15:50.885 "auth": { 00:15:50.885 "state": "completed", 00:15:50.885 "digest": "sha512", 00:15:50.885 "dhgroup": "ffdhe4096" 00:15:50.885 } 00:15:50.885 } 00:15:50.885 ]' 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:50.885 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:51.142 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.142 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.142 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.399 00:02:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:15:52.767 00:02:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.767 00:02:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.767 00:02:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
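
Each round also exercises the in-kernel initiator path: after the SPDK-side host detaches, nvme-cli connects to the same subsystem with the key supplied inline as a DHHC-1 secret, and the "disconnected 1 controller(s)" lines confirm the kernel authenticated, connected, and tore down cleanly. The two-digit field after DHHC-1: tracks the key transformation (00 for an untransformed key, 01/02/03 for SHA-256/384/512-derived keys; note that key0 through key3 in this log carry prefixes 00 through 03 accordingly). The command pair, with the key1 secret copied verbatim from the log:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli:
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
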
00:15:52.767 00:02:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.767 00:02:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.767 00:02:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:52.767 00:02:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:52.768 00:02:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:53.025 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:53.590 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:53.590 { 00:15:53.590 "cntlid": 125, 00:15:53.590 "qid": 0, 00:15:53.590 "state": "enabled", 00:15:53.590 "listen_address": { 00:15:53.590 "trtype": "RDMA", 00:15:53.590 "adrfam": "IPv4", 00:15:53.590 
"traddr": "192.168.100.8", 00:15:53.590 "trsvcid": "4420" 00:15:53.590 }, 00:15:53.590 "peer_address": { 00:15:53.590 "trtype": "RDMA", 00:15:53.590 "adrfam": "IPv4", 00:15:53.590 "traddr": "192.168.100.8", 00:15:53.590 "trsvcid": "35508" 00:15:53.590 }, 00:15:53.590 "auth": { 00:15:53.590 "state": "completed", 00:15:53.590 "digest": "sha512", 00:15:53.590 "dhgroup": "ffdhe4096" 00:15:53.590 } 00:15:53.590 } 00:15:53.590 ]' 00:15:53.590 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:53.847 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.847 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:53.847 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.847 00:02:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:53.847 00:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.847 00:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.847 00:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.105 00:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:15:55.472 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.472 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.472 00:02:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.472 00:02:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.472 00:02:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.472 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:55.472 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:55.472 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:55.729 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:15:55.729 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:55.729 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.729 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:55.729 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:55.729 00:02:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:55.729 00:02:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.729 00:02:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.729 00:02:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.729 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.729 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.294 00:15:56.294 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:56.294 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:56.294 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:56.551 { 00:15:56.551 "cntlid": 127, 00:15:56.551 "qid": 0, 00:15:56.551 "state": "enabled", 00:15:56.551 "listen_address": { 00:15:56.551 "trtype": "RDMA", 00:15:56.551 "adrfam": "IPv4", 00:15:56.551 "traddr": "192.168.100.8", 00:15:56.551 "trsvcid": "4420" 00:15:56.551 }, 00:15:56.551 "peer_address": { 00:15:56.551 "trtype": "RDMA", 00:15:56.551 "adrfam": "IPv4", 00:15:56.551 "traddr": "192.168.100.8", 00:15:56.551 "trsvcid": "50583" 00:15:56.551 }, 00:15:56.551 "auth": { 00:15:56.551 "state": "completed", 00:15:56.551 "digest": "sha512", 00:15:56.551 "dhgroup": "ffdhe4096" 00:15:56.551 } 00:15:56.551 } 00:15:56.551 ]' 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.551 00:02:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.809 00:02:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:15:58.180 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.180 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.180 00:02:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.180 00:02:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.180 00:02:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.180 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.180 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:58.180 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:58.180 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:58.746 00:02:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:59.047 00:15:59.047 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:59.047 00:02:28 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@43 -- # jq -r '.[].name' 00:15:59.047 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.305 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.305 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.305 00:02:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.305 00:02:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.305 00:02:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.305 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:59.305 { 00:15:59.305 "cntlid": 129, 00:15:59.305 "qid": 0, 00:15:59.305 "state": "enabled", 00:15:59.305 "listen_address": { 00:15:59.305 "trtype": "RDMA", 00:15:59.305 "adrfam": "IPv4", 00:15:59.305 "traddr": "192.168.100.8", 00:15:59.305 "trsvcid": "4420" 00:15:59.305 }, 00:15:59.305 "peer_address": { 00:15:59.305 "trtype": "RDMA", 00:15:59.305 "adrfam": "IPv4", 00:15:59.305 "traddr": "192.168.100.8", 00:15:59.305 "trsvcid": "60127" 00:15:59.305 }, 00:15:59.305 "auth": { 00:15:59.305 "state": "completed", 00:15:59.305 "digest": "sha512", 00:15:59.305 "dhgroup": "ffdhe6144" 00:15:59.305 } 00:15:59.305 } 00:15:59.305 ]' 00:15:59.305 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:59.562 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.562 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:59.562 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.562 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:59.562 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.562 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.562 00:02:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.819 00:02:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:16:01.191 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.191 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.191 00:02:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.191 00:02:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.191 00:02:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.191 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for 
keyid in "${!keys[@]}" 00:16:01.191 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:01.191 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:01.448 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:16:01.448 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:01.449 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.449 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:01.449 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:01.449 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:01.449 00:02:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.449 00:02:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 00:02:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.449 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:01.449 00:02:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:02.014 00:16:02.014 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:02.014 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:02.014 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:02.272 { 00:16:02.272 "cntlid": 131, 00:16:02.272 "qid": 0, 00:16:02.272 "state": "enabled", 00:16:02.272 "listen_address": { 00:16:02.272 "trtype": "RDMA", 00:16:02.272 "adrfam": "IPv4", 00:16:02.272 "traddr": "192.168.100.8", 00:16:02.272 "trsvcid": "4420" 00:16:02.272 }, 00:16:02.272 "peer_address": { 00:16:02.272 "trtype": "RDMA", 00:16:02.272 "adrfam": "IPv4", 00:16:02.272 "traddr": "192.168.100.8", 00:16:02.272 "trsvcid": "40588" 00:16:02.272 }, 00:16:02.272 "auth": { 
00:16:02.272 "state": "completed", 00:16:02.272 "digest": "sha512", 00:16:02.272 "dhgroup": "ffdhe6144" 00:16:02.272 } 00:16:02.272 } 00:16:02.272 ]' 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.272 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.530 00:02:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:16:03.901 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.901 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.901 00:02:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.901 00:02:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.901 00:02:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.901 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:03.901 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:03.901 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:04.159 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:16:04.159 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:04.159 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.159 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:04.159 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:04.159 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:04.159 00:02:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.159 00:02:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.159 00:02:33 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.159 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:04.160 00:02:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:04.722 00:16:04.722 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:04.722 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:04.722 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.978 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.978 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.978 00:02:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.978 00:02:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:05.236 { 00:16:05.236 "cntlid": 133, 00:16:05.236 "qid": 0, 00:16:05.236 "state": "enabled", 00:16:05.236 "listen_address": { 00:16:05.236 "trtype": "RDMA", 00:16:05.236 "adrfam": "IPv4", 00:16:05.236 "traddr": "192.168.100.8", 00:16:05.236 "trsvcid": "4420" 00:16:05.236 }, 00:16:05.236 "peer_address": { 00:16:05.236 "trtype": "RDMA", 00:16:05.236 "adrfam": "IPv4", 00:16:05.236 "traddr": "192.168.100.8", 00:16:05.236 "trsvcid": "48170" 00:16:05.236 }, 00:16:05.236 "auth": { 00:16:05.236 "state": "completed", 00:16:05.236 "digest": "sha512", 00:16:05.236 "dhgroup": "ffdhe6144" 00:16:05.236 } 00:16:05.236 } 00:16:05.236 ]' 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.236 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.493 00:02:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:16:06.864 00:02:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.864 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.864 00:02:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.864 00:02:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.864 00:02:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.864 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:06.864 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:06.864 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:07.122 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:07.700 00:16:07.700 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:07.700 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:07.700 00:02:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.958 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.958 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.958 00:02:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.958 00:02:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.958 00:02:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.958 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:07.958 { 00:16:07.958 "cntlid": 135, 00:16:07.958 "qid": 0, 00:16:07.958 "state": "enabled", 00:16:07.958 "listen_address": { 00:16:07.958 "trtype": "RDMA", 00:16:07.958 "adrfam": "IPv4", 00:16:07.958 "traddr": "192.168.100.8", 00:16:07.958 "trsvcid": "4420" 00:16:07.958 }, 00:16:07.958 "peer_address": { 00:16:07.958 "trtype": "RDMA", 00:16:07.958 "adrfam": "IPv4", 00:16:07.958 "traddr": "192.168.100.8", 00:16:07.958 "trsvcid": "55927" 00:16:07.958 }, 00:16:07.958 "auth": { 00:16:07.958 "state": "completed", 00:16:07.958 "digest": "sha512", 00:16:07.958 "dhgroup": "ffdhe6144" 00:16:07.958 } 00:16:07.958 } 00:16:07.958 ]' 00:16:07.958 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:07.958 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.958 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:08.216 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.216 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:08.216 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.216 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.216 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.473 00:02:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:16:09.845 00:02:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.845 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.845 00:02:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.845 00:02:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.845 00:02:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.845 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.845 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:09.845 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:09.845 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:10.103 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:16:10.103 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:10.103 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.103 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:10.103 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:10.103 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:10.104 00:02:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.104 00:02:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.104 00:02:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.104 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:10.104 00:02:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:11.037 00:16:11.037 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:11.037 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.037 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:11.294 { 00:16:11.294 "cntlid": 137, 00:16:11.294 "qid": 0, 00:16:11.294 "state": "enabled", 00:16:11.294 "listen_address": { 00:16:11.294 "trtype": "RDMA", 00:16:11.294 "adrfam": "IPv4", 00:16:11.294 "traddr": "192.168.100.8", 00:16:11.294 "trsvcid": "4420" 00:16:11.294 }, 00:16:11.294 "peer_address": { 00:16:11.294 "trtype": "RDMA", 00:16:11.294 "adrfam": "IPv4", 00:16:11.294 "traddr": "192.168.100.8", 00:16:11.294 "trsvcid": "39987" 00:16:11.294 }, 00:16:11.294 "auth": { 00:16:11.294 "state": "completed", 00:16:11.294 "digest": "sha512", 00:16:11.294 "dhgroup": "ffdhe8192" 00:16:11.294 } 00:16:11.294 } 00:16:11.294 ]' 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
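
The cycle traced above (install one DH-HMAC-CHAP key on the target, attach from the SPDK host, inspect the resulting qpair, detach) repeats for every digest/dhgroup/key combination. A reconstructed sketch of the connect_authenticate() helper driving it, pieced together from the target/auth.sh@34-48 xtrace references in this log -- a paraphrase for orientation, not the verbatim SPDK source; $hostnqn is shorthand introduced here for the long nqn.2014-08.org.nvmexpress:uuid:... host NQN:

    # Sketch reconstructed from the xtrace line refs (target/auth.sh@34-48).
    # hostrpc wraps scripts/rpc.py -s /var/tmp/host.sock (host-side app,
    # target/auth.sh@31); rpc_cmd drives the nvmf target process.
    connect_authenticate() {
        local digest dhgroup key qpairs
        digest=$1 dhgroup=$2 key=key$3
        # Register the host NQN with exactly one key on the subsystem.
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "$key"
        # Attach over RDMA; this is where the DH-HMAC-CHAP handshake runs.
        hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        # Verify the negotiated auth parameters on the admin qpair.
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
        hostrpc bdev_nvme_detach_controller nvme0
    }

Each pass then re-exercises the same key through the kernel initiator (the nvme connect ... --dhchap-secret DHHC-1:0N:... / nvme disconnect pair at target/auth.sh@51-53) before nvmf_subsystem_remove_host clears the host entry for the next combination.
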
00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.294 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.551 00:02:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:16:12.923 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.923 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.923 00:02:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.923 00:02:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.923 00:02:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.923 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:12.924 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:12.924 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:13.504 00:02:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:14.456 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:14.456 { 00:16:14.456 "cntlid": 139, 00:16:14.456 "qid": 0, 00:16:14.456 "state": "enabled", 00:16:14.456 "listen_address": { 00:16:14.456 "trtype": "RDMA", 00:16:14.456 "adrfam": "IPv4", 00:16:14.456 "traddr": "192.168.100.8", 00:16:14.456 "trsvcid": "4420" 00:16:14.456 }, 00:16:14.456 "peer_address": { 00:16:14.456 "trtype": "RDMA", 00:16:14.456 "adrfam": "IPv4", 00:16:14.456 "traddr": "192.168.100.8", 00:16:14.456 "trsvcid": "43801" 00:16:14.456 }, 00:16:14.456 "auth": { 00:16:14.456 "state": "completed", 00:16:14.456 "digest": "sha512", 00:16:14.456 "dhgroup": "ffdhe8192" 00:16:14.456 } 00:16:14.456 } 00:16:14.456 ]' 00:16:14.456 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:14.714 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.714 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:14.714 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.714 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:14.714 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.714 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.714 00:02:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.971 00:02:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2JjMDcwYjBjMTcyNzVhMzZiMmRjMjBiZmRiNDQzOTfmkbli: 00:16:16.346 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.346 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.346 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.346 00:02:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.346 00:02:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.346 00:02:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.346 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:16.346 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.346 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.604 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:16:16.604 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:16.604 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.604 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:16.605 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:16.605 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:16.605 00:02:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.605 00:02:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.605 00:02:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.605 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:16.605 00:02:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:17.538 00:16:17.538 00:02:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:17.538 00:02:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:17.538 00:02:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.796 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.796 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.796 00:02:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.796 00:02:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.796 00:02:47 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.796 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:17.796 { 00:16:17.796 "cntlid": 141, 00:16:17.796 "qid": 0, 00:16:17.796 "state": "enabled", 00:16:17.796 "listen_address": { 00:16:17.796 "trtype": "RDMA", 00:16:17.796 "adrfam": "IPv4", 00:16:17.796 "traddr": "192.168.100.8", 00:16:17.796 "trsvcid": "4420" 00:16:17.796 }, 00:16:17.796 "peer_address": { 00:16:17.796 "trtype": "RDMA", 00:16:17.796 "adrfam": "IPv4", 00:16:17.796 "traddr": "192.168.100.8", 00:16:17.796 "trsvcid": "53560" 00:16:17.796 }, 00:16:17.796 "auth": { 00:16:17.796 "state": "completed", 00:16:17.796 "digest": "sha512", 00:16:17.796 "dhgroup": "ffdhe8192" 00:16:17.796 } 00:16:17.796 } 00:16:17.796 ]' 00:16:17.796 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:17.796 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.796 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:18.055 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.055 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:18.055 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.055 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.055 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.312 00:02:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDA1NjQ3MTg1Y2MzZjM3YjA1NmFiNzE3MmJlNjU4NzAzYTA5ODZiNjNhOGJhYzRkXMwLMQ==: 00:16:19.685 00:02:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.685 00:02:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.685 00:02:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.685 00:02:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.685 00:02:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.685 00:02:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:19.685 00:02:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:19.685 00:02:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:19.943 00:02:49 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.943 00:02:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.876 00:16:20.876 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:20.876 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.876 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:21.134 { 00:16:21.134 "cntlid": 143, 00:16:21.134 "qid": 0, 00:16:21.134 "state": "enabled", 00:16:21.134 "listen_address": { 00:16:21.134 "trtype": "RDMA", 00:16:21.134 "adrfam": "IPv4", 00:16:21.134 "traddr": "192.168.100.8", 00:16:21.134 "trsvcid": "4420" 00:16:21.134 }, 00:16:21.134 "peer_address": { 00:16:21.134 "trtype": "RDMA", 00:16:21.134 "adrfam": "IPv4", 00:16:21.134 "traddr": "192.168.100.8", 00:16:21.134 "trsvcid": "52993" 00:16:21.134 }, 00:16:21.134 "auth": { 00:16:21.134 "state": "completed", 00:16:21.134 "digest": "sha512", 00:16:21.134 "dhgroup": "ffdhe8192" 00:16:21.134 } 00:16:21.134 } 00:16:21.134 ]' 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.134 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.392 00:02:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDY1ODg3ZGU0Zjg5NDE3MjBmNWJlY2E3NWVhM2FhODdhNDQxOGRkYmYxNjc5OTA3NzQwYjIwMWFmYTNkN2E2NDRUc0E=: 00:16:22.764 00:02:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.764 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.764 00:02:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.764 00:02:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.764 00:02:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.764 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:16:22.764 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:16:22.764 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:16:22.765 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:22.765 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:22.765 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:23.022 00:02:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:23.954 00:16:23.954 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:23.954 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:23.954 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.211 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.211 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.211 00:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.211 00:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.211 00:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.211 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:24.211 { 00:16:24.211 "cntlid": 145, 00:16:24.211 "qid": 0, 00:16:24.211 "state": "enabled", 00:16:24.211 "listen_address": { 00:16:24.211 "trtype": "RDMA", 00:16:24.211 "adrfam": "IPv4", 00:16:24.211 "traddr": "192.168.100.8", 00:16:24.211 "trsvcid": "4420" 00:16:24.211 }, 00:16:24.211 "peer_address": { 00:16:24.211 "trtype": "RDMA", 00:16:24.211 "adrfam": "IPv4", 00:16:24.211 "traddr": "192.168.100.8", 00:16:24.211 "trsvcid": "38860" 00:16:24.211 }, 00:16:24.211 "auth": { 00:16:24.211 "state": "completed", 00:16:24.211 "digest": "sha512", 00:16:24.211 "dhgroup": "ffdhe8192" 00:16:24.211 } 00:16:24.211 } 00:16:24.211 ]' 00:16:24.211 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:24.211 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.211 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:24.469 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.469 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:24.469 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.469 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.469 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.726 00:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmUyYmI1OTBmMmU0ODc1YmNjYjEwNGUwOWZkNjY0ODViNTUwN2E2Njk1N2YzYTE1lw+qqQ==: 00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.098 00:02:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:26.099 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0
00:16:26.099 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:26.099 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:16:26.099 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:26.099 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:16:26.099 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:26.099 00:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:26.099 00:02:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:58.223 request:
00:16:58.224 {
00:16:58.224 "name": "nvme0",
00:16:58.224 "trtype": "rdma",
00:16:58.224 "traddr": "192.168.100.8",
00:16:58.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:58.224 "adrfam": "ipv4",
00:16:58.224 "trsvcid": "4420",
00:16:58.224 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:16:58.224 "dhchap_key": "key2",
00:16:58.224 "method": "bdev_nvme_attach_controller",
00:16:58.224 "req_id": 1
00:16:58.224 }
00:16:58.224 Got JSON-RPC error response
00:16:58.224 response:
00:16:58.224 {
00:16:58.224 "code": -32602,
00:16:58.224 "message": "Invalid parameters"
00:16:58.224 }
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # cleanup
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 519072
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 519072 ']'
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 519072
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 519072
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 519072'
00:16:58.224 killing process with pid 519072
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 519072
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 519072
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:16:58.224 rmmod nvme_rdma
00:16:58.224 rmmod nvme_fabrics
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 518920 ']'
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 518920
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 518920 ']'
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 518920
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 518920
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 518920'
00:16:58.224 killing process with pid 518920
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 518920
00:16:58.224 00:03:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 518920
00:16:58.224 00:03:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:58.224 00:03:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:16:58.224 00:03:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ayq /tmp/spdk.key-sha256.XsA /tmp/spdk.key-sha384.32d /tmp/spdk.key-sha512.yfU /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log
00:16:58.224
00:16:58.224 real 4m5.079s
00:16:58.224 user 9m16.712s
00:16:58.224 sys 0m17.104s
00:16:58.224 00:03:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable
00:16:58.224 00:03:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.224 ************************************
00:16:58.224 END TEST nvmf_auth_target
00:16:58.224 ************************************
00:16:58.224 00:03:27 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']'
00:16:58.224 00:03:27 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']'
00:16:58.224 00:03:27 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]]
00:16:58.224 00:03:27 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']'
00:16:58.224 00:03:27 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]]
00:16:58.224 00:03:27 nvmf_rdma -- nvmf/nvmf.sh@79 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma
00:16:58.224 00:03:27 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:16:58.224 00:03:27 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:16:58.224 00:03:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:16:58.224 ************************************
00:16:58.224 START TEST nvmf_device_removal
00:16:58.224 ************************************
00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1121 -- # test/nvmf/target/device_removal.sh --transport=rdma
00:16:58.224 * Looking for test storage...
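
The START TEST / END TEST banners and the real/user/sys summary around each suite come from the harness's run_test() wrapper in common/autotest_common.sh, which has just dispatched test/nvmf/target/device_removal.sh --transport=rdma. A sketch of that wrapper's shape as implied by the @1097-@1121 trace references above -- an approximation for orientation, not the verbatim source:

    # Approximate run_test() inferred from the trace: the argument-count guard
    # ('[' 3 -le 1 ']'), the asterisk banners, and the time(1) summary all
    # appear in the log; the exact failure handling is paraphrased.
    run_test() {
        [ "$#" -le 1 ] && return 1      # needs a test name plus a command
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                       # e.g. test/nvmf/target/device_removal.sh --transport=rdma
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

The device_removal suite that starts here then probes for its test storage before sourcing autotest_common.sh and the build_config.sh CONFIG_ settings, as the following lines show.
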
00:16:58.224 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@34 -- # set -e 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@22 -- # CONFIG_CET=n 00:16:58.224 00:03:27 
nvmf_rdma.nvmf_device_removal -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:16:58.224 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:16:58.225 00:03:27 
nvmf_rdma.nvmf_device_removal -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@70 -- # CONFIG_FC=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@83 -- # CONFIG_URING=n 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:16:58.225 
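Everything from build_config.sh@1 through @83 is a flat sourcing of configure-time CONFIG_*=... shell assignments (UBSAN and RDMA on, ASAN/TSAN off for this run); applications.sh then derives the app directories from the repo root it just resolved. A hedged sketch of how a test consumes those flags after sourcing:

    source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh
    [ "$CONFIG_RDMA" = y ]  || exit 1   # a --transport=rdma run needs RDMA compiled in
    [ "$CONFIG_UBSAN" = y ] && echo "binaries carry UndefinedBehaviorSanitizer"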
00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:58.225 #define SPDK_CONFIG_H 00:16:58.225 #define SPDK_CONFIG_APPS 1 00:16:58.225 #define SPDK_CONFIG_ARCH native 00:16:58.225 #undef SPDK_CONFIG_ASAN 00:16:58.225 #undef SPDK_CONFIG_AVAHI 00:16:58.225 #undef SPDK_CONFIG_CET 00:16:58.225 #define SPDK_CONFIG_COVERAGE 1 00:16:58.225 #define SPDK_CONFIG_CROSS_PREFIX 00:16:58.225 #undef SPDK_CONFIG_CRYPTO 00:16:58.225 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:58.225 #undef SPDK_CONFIG_CUSTOMOCF 00:16:58.225 #undef SPDK_CONFIG_DAOS 00:16:58.225 #define SPDK_CONFIG_DAOS_DIR 00:16:58.225 #define SPDK_CONFIG_DEBUG 1 00:16:58.225 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:58.225 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:16:58.225 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:58.225 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:58.225 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:58.225 #undef SPDK_CONFIG_DPDK_UADK 00:16:58.225 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:16:58.225 #define SPDK_CONFIG_EXAMPLES 1 00:16:58.225 #undef SPDK_CONFIG_FC 00:16:58.225 #define SPDK_CONFIG_FC_PATH 00:16:58.225 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:58.225 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:58.225 #undef SPDK_CONFIG_FUSE 00:16:58.225 #undef SPDK_CONFIG_FUZZER 00:16:58.225 #define SPDK_CONFIG_FUZZER_LIB 00:16:58.225 #undef SPDK_CONFIG_GOLANG 00:16:58.225 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:58.225 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:58.225 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:58.225 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:16:58.225 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:58.225 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:58.225 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:58.225 #define SPDK_CONFIG_IDXD 1 00:16:58.225 #undef SPDK_CONFIG_IDXD_KERNEL 00:16:58.225 #undef SPDK_CONFIG_IPSEC_MB 00:16:58.225 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:58.225 #define SPDK_CONFIG_ISAL 1 00:16:58.225 
#define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:58.225 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:58.225 #define SPDK_CONFIG_LIBDIR 00:16:58.225 #undef SPDK_CONFIG_LTO 00:16:58.225 #define SPDK_CONFIG_MAX_LCORES 00:16:58.225 #define SPDK_CONFIG_NVME_CUSE 1 00:16:58.225 #undef SPDK_CONFIG_OCF 00:16:58.225 #define SPDK_CONFIG_OCF_PATH 00:16:58.225 #define SPDK_CONFIG_OPENSSL_PATH 00:16:58.225 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:58.225 #define SPDK_CONFIG_PGO_DIR 00:16:58.225 #undef SPDK_CONFIG_PGO_USE 00:16:58.225 #define SPDK_CONFIG_PREFIX /usr/local 00:16:58.225 #undef SPDK_CONFIG_RAID5F 00:16:58.225 #undef SPDK_CONFIG_RBD 00:16:58.225 #define SPDK_CONFIG_RDMA 1 00:16:58.225 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:58.225 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:58.225 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:58.225 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:58.225 #define SPDK_CONFIG_SHARED 1 00:16:58.225 #undef SPDK_CONFIG_SMA 00:16:58.225 #define SPDK_CONFIG_TESTS 1 00:16:58.225 #undef SPDK_CONFIG_TSAN 00:16:58.225 #define SPDK_CONFIG_UBLK 1 00:16:58.225 #define SPDK_CONFIG_UBSAN 1 00:16:58.225 #undef SPDK_CONFIG_UNIT_TESTS 00:16:58.225 #undef SPDK_CONFIG_URING 00:16:58.225 #define SPDK_CONFIG_URING_PATH 00:16:58.225 #undef SPDK_CONFIG_URING_ZNS 00:16:58.225 #undef SPDK_CONFIG_USDT 00:16:58.225 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:58.225 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:58.225 #undef SPDK_CONFIG_VFIO_USER 00:16:58.225 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:58.225 #define SPDK_CONFIG_VHOST 1 00:16:58.225 #define SPDK_CONFIG_VIRTIO 1 00:16:58.225 #undef SPDK_CONFIG_VTUNE 00:16:58.225 #define SPDK_CONFIG_VTUNE_DIR 00:16:58.225 #define SPDK_CONFIG_WERROR 1 00:16:58.225 #define SPDK_CONFIG_WPDK_DIR 00:16:58.225 #undef SPDK_CONFIG_XNVME 00:16:58.225 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.225 00:03:27 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@64 -- # TEST_TAG=N/A 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # uname -s 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # PM_OS=Linux 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- 
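Two prologue steps sit above: applications.sh verified the generated header, and paths/export.sh rebuilt PATH. The backslash-heavy pattern following the config.h dump is only xtrace escaping around an ordinary substring glob; a readable equivalent:

    config_h=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h
    if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build confirmed; SPDK_AUTOTEST_DEBUG_APPS (0 on this run) may then apply
    fi

As for PATH: export.sh prepends the /opt/golangci, /opt/go and /opt/protoc bin directories each time scripts/common.sh is sourced, so the repeated entries in the logged PATH are expected accumulation, not corruption; lookup still resolves on the first match.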
pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[0]= 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@57 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@61 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@63 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@65 -- # : 1 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@67 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@69 -- # : 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@71 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@73 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@75 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@77 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@78 -- # export 
SPDK_TEST_ISCSI_INITIATOR 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@79 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@81 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@83 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@85 -- # : 1 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@87 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@89 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@91 -- # : 1 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@93 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@95 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@97 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@99 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@101 -- # : rdma 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@103 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@105 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@107 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@109 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@111 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@112 -- # export 
SPDK_TEST_BLOBFS 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@113 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@115 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@117 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@119 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@121 -- # : 1 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@123 -- # : 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@125 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@127 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@129 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@131 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@133 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@135 -- # : 0 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:16:58.226 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@137 -- # : 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@139 -- # : true 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@141 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@143 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@145 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 
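Each `: <value>` / `export SPDK_TEST_*` pair in this run of lines is one parameter-default idiom: keep whatever autorun-spdk.conf already set (hence ': 1' for SPDK_TEST_NVMF, SPDK_TEST_NVME_CLI and SPDK_RUN_UBSAN), otherwise seed the default, then export. Sketch of the pattern:

    : "${SPDK_TEST_NVMF:=0}"        # no-op here: the sourced conf already exported 1
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_NICS:=}"    # mlx5 on this run, via autorun-spdk.conf
    export SPDK_TEST_NVMF_NICS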
00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@147 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@149 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@151 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@153 -- # : mlx5 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@155 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@157 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@159 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@161 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@163 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@166 -- # : 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@168 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@170 -- # : 0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@192 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@199 -- # cat 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@255 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # export valgrind= 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # valgrind= 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@268 -- # uname -s 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@278 -- # MAKE=make 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@298 -- # TEST_MODE= 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@299 -- # for i in "$@" 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@300 -- # case "$i" in 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@317 -- # [[ -z 549552 ]] 00:16:58.227 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@317 -- # kill -0 549552 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@330 -- # local mount target_dir 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- 
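A few entries back, the sanitizer runtime knobs were staged: ASAN_OPTIONS and UBSAN_OPTIONS are exported unconditionally (they only bite when the matching instrumentation was compiled in, which here is UBSAN only, per CONFIG_UBSAN=y), and a leak-sanitizer allowlist is rebuilt from scratch on every run. Reconstructed as a sketch:

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"   # the one suppressed leak source
    export LSAN_OPTIONS=suppressions=$asan_suppression_file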
common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.MrMF2T 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.MrMF2T/tests/target /tmp/spdk.MrMF2T 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@326 -- # df -T 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=968667136 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=4315762688 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=48345690112 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=13649039360 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # 
avails["$mount"]=30934978560 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=62386176 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=12376326144 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=22622208 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=30995857408 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=1507328 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:16:58.228 * Looking for test storage... 
00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@367 -- # local target_space new_size 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@371 -- # mount=/ 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@373 -- # target_space=48345690112 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # new_size=15863631872 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:58.228 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@388 -- # return 0 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1678 -- # set -o errtrace 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1683 -- # true 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1685 -- # xtrace_fd 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@27 -- # exec 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@29 -- # exec 00:16:58.228 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@31 -- # xtrace_restore 
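The candidate acceptance above is plain integer arithmetic on those arrays, worked out for this node:

    target_space = avails[/]                = 48,345,690,112   (>= requested 2,214,592,512)
    new_size     = uses[/] + requested_size = 13,649,039,360 + 2,214,592,512
                                            = 15,863,631,872
    new_size * 100 / sizes[/]               = 1,586,363,187,200 / 61,994,729,472
                                            = 25   (integer division; 25 <= 95)

Since the projected utilization stays at or under 95%, the first candidate wins and the workspace itself becomes SPDK_TEST_STORAGE; only on failure would the /tmp/spdk.MrMF2T fallbacks be consulted.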
00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@18 -- # set -x 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # uname -s 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- 
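nvmf/common.sh has now pinned the fabric identity for the suite: listener ports 4420-4422, the 192.168.100.0/24 test subnet starting at host .8, and a host NQN/ID pair freshly minted by nvme gen-hostnqn. A hedged sketch of how these pieces typically compose later in a suite (the connect line itself is not part of this excerpt, and the IP variable name is an assumption):

    NVMF_FIRST_TARGET_IP=192.168.100.8    # NVMF_IP_PREFIX plus NVMF_IP_LEAST_ADDR
    $NVME_CONNECT -t rdma -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
        -n "$NVME_SUBNQN" "${NVME_HOST[@]}"   # expands to --hostnqn/--hostid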
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@47 -- # : 0 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@18 -- # nvmftestinit 00:16:58.229 00:03:27 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@285 -- # xtrace_disable 00:16:58.229 00:03:27 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # pci_devs=() 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # net_devs=() 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # e810=() 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # local -ga e810 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # x722=() 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # local -ga x722 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # mlx=() 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # local -ga mlx 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.763 00:03:29 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:00.763 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:17:00.764 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:17:00.764 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:17:00.764 Found net devices under 0000:09:00.0: mlx_0_0 00:17:00.764 00:03:29 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:17:00.764 Found net devices under 0000:09:00.1: mlx_0_1 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # is_hw=yes 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@420 -- # rdma_device_init 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # uname 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
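load_ib_rdma_modules, traced above, is a fixed modprobe sequence; reproduced standalone, with the module list exactly as logged:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"    # requires root; a no-op if the module is already loaded
    done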
00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:00.764 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:00.764 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:17:00.764 altname enp9s0f0np0 00:17:00.764 inet 192.168.100.8/24 scope global mlx_0_0 00:17:00.764 valid_lft forever preferred_lft forever 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:00.764 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:00.764 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:17:00.764 altname enp9s0f1np1 00:17:00.764 inet 192.168.100.9/24 scope global mlx_0_1 00:17:00.764 valid_lft forever preferred_lft forever 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@422 -- # return 0 00:17:00.764 00:03:29 
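The nvmf/common.sh@113 pipeline above is the interface-to-IP lookup used throughout the run; as a standalone helper it looks like this (commands mirrored from the trace, name as in common.sh):

    get_ip_address() {
        local interface=$1
        # ip -o prints one record per line; field 4 is the CIDR address
        # (e.g. "192.168.100.8/24"), and cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node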
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.764 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:00.765 00:03:29 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:00.765 192.168.100.9' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:00.765 192.168.100.9' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # head -n 1 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:00.765 192.168.100.9' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # tail -n +2 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # head -n 1 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@237 -- # BOND_MASK=24 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:17:00.765 ************************************ 00:17:00.765 START TEST nvmf_device_removal_pci_remove_no_srq 00:17:00.765 ************************************ 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1121 -- # test_remove_and_rescan --no-srq 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@481 -- # nvmfpid=551498 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@482 -- # 
waitforlisten 551498 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@827 -- # '[' -z 551498 ']' 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:00.765 00:03:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:00.765 [2024-05-15 00:03:29.939818] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:17:00.765 [2024-05-15 00:03:29.939903] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.765 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.765 [2024-05-15 00:03:30.018618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:01.024 [2024-05-15 00:03:30.140433] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.024 [2024-05-15 00:03:30.140491] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.024 [2024-05-15 00:03:30.140505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.024 [2024-05-15 00:03:30.140516] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.024 [2024-05-15 00:03:30.140536] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
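nvmfappstart, whose output appears above, backgrounds the target and then waits for its RPC socket to answer. A condensed sketch of that pattern follows; the poll loop is simplified from waitforlisten, which additionally checks that the pid stays alive:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Wait for the UNIX-domain RPC socket before issuing any commands.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done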
00:17:01.024 [2024-05-15 00:03:30.140617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.024 [2024-05-15 00:03:30.140622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@860 -- # return 0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.024 [2024-05-15 00:03:30.312037] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9eb3d0/0x9ef8c0) succeed. 00:17:01.024 [2024-05-15 00:03:30.323836] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9ec8d0/0xa30f50) succeed. 
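rpc_cmd in this trace wraps scripts/rpc.py against /var/tmp/spdk.sock, so the provisioning that follows for mlx_0_0 (transport, malloc bdev, subsystem, namespace, RDMA listener) reduces to five calls, arguments exactly as logged:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq
    # One 128 MiB / 512 B-block malloc bdev per NIC, named after the netdev:
    $rpc bdev_malloc_create 128 512 -b mlx_0_0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420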
00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # get_rdma_if_list 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:17:01.024 
00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.024 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 [2024-05-15 00:03:30.431156] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:01.283 [2024-05-15 00:03:30.431527] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 [2024-05-15 00:03:30.510301] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@53 -- # return 0 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@132 -- # 
generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # local dev_names 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@91 -- # bdevperf_pid=551667 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@94 -- # waitforlisten 551667 /var/tmp/bdevperf.sock 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@827 -- # '[' -z 551667 ']' 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.283 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
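The bdevperf invocation above uses -z, which holds the I/O run until it is kicked off over a dedicated RPC socket; the launch-and-trigger pair, as used in this trace:

    # Start bdevperf idle (-z) on its own socket: QD 128, 4 KiB verify I/O, 90 s run.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 90 &
    # ... attach the NVMe-oF bdevs over that socket, then start the workload:
    ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests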
00:17:01.284 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.284 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@860 -- # return 0 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.542 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.800 Nvme_mlx_0_0n1 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 
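The attach call above is what makes the remote namespace visible inside bdevperf; note the reconnect tuning, which is what the removal test exercises. My reading of the short options: -l -1 disables the controller-loss timeout and -o 1 sets a 1-second reconnect delay.

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
    # -> namespace 1 appears as bdev "Nvme_mlx_0_0n1"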
00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.800 00:03:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:01.800 Nvme_mlx_0_1n1 00:17:01.800 00:03:31 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.800 00:03:31 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=551681 00:17:01.800 00:03:31 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@112 -- # sleep 5 00:17:01.800 00:03:31 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:17:07.099 00:03:36 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/infiniband 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.099 mlx5_0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:17:07.099 00:03:36 
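Two lookups traced above are worth isolating: mapping a netdev to its RDMA device through sysfs, and confirming the nvmf target still owns that device (jq filter exactly as logged):

    # netdev -> RDMA device name (the get_pci_dir / get_rdma_device_name pattern):
    pci_dir=$(readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device)
    ls "$pci_dir/infiniband"          # -> mlx5_0
    # Is the device still known to the running nvmf target?
    ./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[0].transports[].devices[].name' | grep mlx5_0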
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_0
00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1
00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0
00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0
00:17:07.099 00:03:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device
00:17:07.099 [2024-05-15 00:03:36.164750] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed.
00:17:07.099 [2024-05-15 00:03:36.164936] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:17:07.099 [2024-05-15 00:03:36.165061] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:17:07.099 [2024-05-15 00:03:36.165085] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 94
00:17:07.099 [2024-05-15 00:03:36.165097] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1)
[... rdma.c: 632/634:nvmf_rdma_dump_request: the 94 queued requests are each dumped as the same two lines, "Request Data From Pool: 1" and "Request opcode: 2", timestamps 00:03:36.165108 through 00:03:36.167092; repeats condensed ...]
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10)
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@158 -- # ib_count_after_remove=1 00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:17:10.383 00:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:17:11.315 00:03:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:17:11.315 00:03:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:17:11.315 00:03:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/net 00:17:11.315 00:03:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:17:11.315 00:03:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:17:11.315 00:03:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:17:11.315 00:03:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:17:11.315 00:03:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:17:11.316 00:03:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:17:11.316 [2024-05-15 00:03:40.605292] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9ec760/0x9ef8c0) succeed. 00:17:11.316 [2024-05-15 00:03:40.605390] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
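The removal and rescan halves of the cycle just traced reduce to two sysfs writes. The trace shows only "echo 1" plus the helper names (remove_one_nic at @66-@67, rescan_pci at @57), so the exact target nodes below, $pci_dir/remove and /sys/bus/pci/rescan, are the standard kernel interfaces assumed by this sketch:

    # Hot-unplug one port: writing 1 to the device's sysfs "remove" node
    # detaches the PCI function, and the nvmf target reacts with the
    # "Port ... is being removed" handling seen in the log above
    remove_one_nic() {
        local dev_name=$1
        echo 1 > "$(get_pci_dir "$dev_name")/remove"
    }

    # Bring the function back: a bus-level rescan re-enumerates it, after
    # which create_ib_device fires and the netdev reappears under .../net
    rescan_pci() {
        echo 1 > /sys/bus/pci/rescan
    }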
00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:12.247 [2024-05-15 00:03:41.546323] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:12.247 [2024-05-15 00:03:41.546364] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:17:12.247 [2024-05-15 00:03:41.546386] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:17:12.247 [2024-05-15 00:03:41.546406] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/infiniband 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:17:12.247 00:03:41 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:17:12.247 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1
00:17:12.505 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:12.505 mlx5_1
00:17:12.505 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0
00:17:12.505 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1
00:17:12.505 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_1
00:17:12.505 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1
00:17:12.505 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1
00:17:12.505 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1
00:17:12.505 00:03:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device
00:17:12.505 [2024-05-15 00:03:41.639860] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed.
00:17:12.506 [2024-05-15 00:03:41.639989] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:17:12.506 [2024-05-15 00:03:41.644315] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:17:12.506 [2024-05-15 00:03:41.644340] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 66
00:17:12.506 [2024-05-15 00:03:41.644368] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1)
[... rdma.c: 632/634:nvmf_rdma_dump_request: the 66 queued requests are each dumped as a "Request Data From Pool: 0|1" line and the matching "Request opcode: 1|2" line, timestamps 00:03:41.644379 through 00:03:41.645875; repeats condensed ...]
00:17:16.684 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10
00:17:16.684 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10)
00:17:16.684 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1
00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1
00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 
-- # set +x 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:17:16.685 00:03:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:17:16.942 [2024-05-15 00:03:46.075888] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xc23d70, err 11. Skip rescan. 
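After the rescan, the recovery path traced at @162-@189 (seen above for mlx_0_0 and about to repeat for mlx_0_1) waits for the netdev to come back, brings the link up, restores the address lost in the remove/rescan cycle, and polls the target until its IB device count rises above ib_count_after_remove. A condensed sketch under those assumptions; the sleeps are inferred, since the trace only shows the seq loops and breaks:

    # Count the RDMA devices the target's poll group currently holds (@82-@83)
    get_rdma_dev_count_in_nvmf_tgt() {
        rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'
    }

    # Recover one port after a rescan; ib_count_after_remove is assumed to
    # have been recorded by the caller, as at @158 above
    recover_nic() {
        local pci_dir=$1 origin_ip=$2 new_net_dev
        for i in $(seq 1 10); do                    # @162: wait for the netdev
            new_net_dev=$(ls "$pci_dir/net" 2>/dev/null) && break
            sleep 1
        done
        [[ -z $new_net_dev ]] && return 1
        ip link set "$new_net_dev" up               # @179
        # @180-@181: get_ip_address returns '' after the cycle, so re-add it
        [[ -z $(get_ip_address "$new_net_dev") ]] &&
            ip addr add "$origin_ip/24" dev "$new_net_dev"
        for i in $(seq 1 10); do                    # @186-@189
            ib_count=$(get_rdma_dev_count_in_nvmf_tgt)
            (( ib_count > ib_count_after_remove )) && return 0
            sleep 1
        done
        return 1
    }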
00:17:16.942 00:03:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:17:16.942 00:03:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:17:16.942 00:03:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/net 00:17:16.942 00:03:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:17:16.942 00:03:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:17:16.942 00:03:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:17:16.942 00:03:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:17:16.942 00:03:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:17:16.942 00:03:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:17:16.942 [2024-05-15 00:03:46.183462] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbf4220/0xa30f50) succeed. 00:17:16.942 [2024-05-15 00:03:46.183556] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.875 00:03:47 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:17:17.875 [2024-05-15 00:03:47.084396] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:17:17.875 [2024-05-15 00:03:47.084454] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:17:17.875 [2024-05-15 00:03:47.084475] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:17:17.875 [2024-05-15 00:03:47.084489] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@200 -- # stop_bdevperf 00:17:17.875 00:03:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@116 -- # wait 551681 00:18:39.306 0 00:18:39.306 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@118 -- # killprocess 551667 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@946 -- # '[' -z 551667 ']' 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@950 -- # kill -0 551667 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # uname 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 551667 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 551667' 00:18:39.307 killing process with pid 551667 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@965 -- # kill 551667 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@970 -- # wait 551667 00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@119 -- # bdevperf_pid= 00:18:39.307 00:05:01 
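For reference, the killprocess helper being traced here (common/autotest_common.sh, roughly @946-@970) reduces to the sketch below; only the Linux branch visible in the trace is shown, and the exact script body may differ:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # @946: refuse an empty pid
        kill -0 "$pid" || return 0                # @950: already gone?
        if [ "$(uname)" = Linux ]; then           # @951
            process_name=$(ps --no-headers -o comm= "$pid")   # @952
        fi
        # @956: never kill a bare sudo wrapper by mistake
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"      # @964
        kill "$pid"                               # @965
        wait "$pid"                               # @970: reap it
    }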
00:18:39.307 00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:18:39.307 [2024-05-15 00:03:30.555800] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:18:39.307 [2024-05-15 00:03:30.555887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551667 ]
00:18:39.307 EAL: No free 2048 kB hugepages reported on node 1
00:18:39.307 [2024-05-15 00:03:30.628925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:39.307 [2024-05-15 00:03:30.737121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:39.307 Running I/O for 90 seconds...
00:18:39.307 [2024-05-15 00:03:36.159824] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:39.307 [2024-05-15 00:03:36.159889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:39.307 [2024-05-15 00:03:36.159908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:18:39.307 [2024-05-15 00:03:36.159925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:39.307 [2024-05-15 00:03:36.159963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:18:39.307 [2024-05-15 00:03:36.159989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:39.307 [2024-05-15 00:03:36.160003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:18:39.307 [2024-05-15 00:03:36.160017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:18:39.307 [2024-05-15 00:03:36.160031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:18:39.307 [2024-05-15 00:03:36.161392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:39.307 [2024-05-15 00:03:36.161417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:18:39.307 [2024-05-15 00:03:36.161476] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:39.307-00:18:39.308 [2024-05-15 00:03:36.169813 - 00:03:37.160173] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (repeated 97 times at roughly 10 ms intervals while the controller reset was outstanding; run collapsed)
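Those failover notices are emitted each time bdev_nvme re-enters its failover path while a reset of the same controller is still outstanding; on this run they fire roughly every 10 ms for about a second. When triaging a raw console log like this one, the run can be condensed with a one-liner (build.log is a hypothetical filename for the saved console output; the field positions assume the raw per-line format shown in this log):

    # Count the retry notices and report the first/last in-test timestamps.
    awk '/Unable to perform failover, already in progress/ {
             t = $3; sub(/\]$/, "", t)   # $3 is the time half of "[date time]"
             n++; if (first == "") first = t; last = t
         }
         END { printf "%d notices between %s and %s\n", n, first, last }' build.log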
00:18:39.308-00:18:39.312 [2024-05-15 00:03:37.164367 - 00:03:37.168255] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 102 queued READ commands (sqid:1 nsid:1 lba:125136-125944 len:8, SGL KEYED DATA BLOCK ADDRESS 0x200007734000-0x2000077fe000 len:0x1000 key:0x184a00) and 25 queued WRITE commands (sqid:1 nsid:1 lba:125952-126144 len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 (127 command/completion pairs, collapsed)
00:18:39.312 [2024-05-15 00:03:37.183619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:39.312 [2024-05-15 00:03:37.183643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: [2024-05-15
00:03:37.183672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126152 len:8 PRP1 0x0 PRP2 0x0
00:18:39.312 [2024-05-15 00:03:37.183686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:39.312 [2024-05-15 00:03:37.185591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:18:39.312 [2024-05-15 00:03:37.185955] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:18:39.312 [2024-05-15 00:03:37.185980] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:39.312 [2024-05-15 00:03:37.186007] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:18:39.312 [2024-05-15 00:03:37.186034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:39.312 [2024-05-15 00:03:37.186064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:18:39.312 [2024-05-15 00:03:37.186131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:18:39.312 [2024-05-15 00:03:37.186152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:18:39.312 [2024-05-15 00:03:37.186167] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:18:39.312 [2024-05-15 00:03:37.186198] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:39.312 [2024-05-15 00:03:37.186222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:18:39.312 [2024-05-15 00:03:39.192583] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:39.312 [2024-05-15 00:03:39.192638] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:18:39.312 [2024-05-15 00:03:39.192693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:39.312 [2024-05-15 00:03:39.192710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:18:39.312 [2024-05-15 00:03:39.192757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:18:39.312 [2024-05-15 00:03:39.192778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:18:39.312 [2024-05-15 00:03:39.192793] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:18:39.312 [2024-05-15 00:03:39.192842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:39.312 [2024-05-15 00:03:39.192861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:18:39.312 [2024-05-15 00:03:41.197843] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:39.312 [2024-05-15 00:03:41.197886] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:18:39.312 [2024-05-15 00:03:41.197925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:39.312 [2024-05-15 00:03:41.197953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:18:39.312 [2024-05-15 00:03:41.197975] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:18:39.312 [2024-05-15 00:03:41.197997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:18:39.312 [2024-05-15 00:03:41.198012] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:18:39.312 [2024-05-15 00:03:41.198048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:39.312 [2024-05-15 00:03:41.198066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:18:39.312 [2024-05-15 00:03:41.645543] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:39.312 [2024-05-15 00:03:41.645582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:39.312 [2024-05-15 00:03:41.645600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:18:39.312 [2024-05-15 00:03:41.645616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:39.312 [2024-05-15 00:03:41.645629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:18:39.312 [2024-05-15 00:03:41.645643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:39.312 [2024-05-15 00:03:41.645656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:18:39.312 [2024-05-15 00:03:41.645684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:18:39.312 [2024-05-15 00:03:41.645699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:18:39.312 [2024-05-15 00:03:41.650346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:39.312 [2024-05-15 00:03:41.650374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
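(Editor's note: the three reset attempts above land at 00:03:37.185, 00:03:39.192, and 00:03:41.197, roughly 2 s apart. Each one fails at RDMA address resolution, the controller is marked failed, _bdev_nvme_reset_ctrlr_complete reports "Resetting controller failed.", and nvme_ctrlr_disconnect starts the next cycle. A minimal C sketch of that retry cadence follows; resolve_target_addr() and the retry policy are hypothetical stand-ins, not SPDK source.)

/* Illustrative only -- not SPDK source. A reconnect loop with the ~2 s
 * cadence seen in the log above. */
#define _POSIX_C_SOURCE 199309L   /* for nanosleep() */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool resolve_target_addr(int attempt)
{
    /* Assumption for the demo: the fabric path heals on the third attempt,
     * matching the reset that finally succeeds at 00:03:42 further down. */
    return attempt >= 3;
}

int main(void)
{
    const struct timespec retry_delay = { .tv_sec = 2, .tv_nsec = 0 };

    for (int attempt = 1;; attempt++) {
        printf("attempt %d: resetting controller\n", attempt);
        if (resolve_target_addr(attempt)) {
            printf("attempt %d: resetting controller successful\n", attempt);
            return 0;
        }
        printf("attempt %d: RDMA address resolution error; "
               "controller reinitialization failed\n", attempt);
        nanosleep(&retry_delay, NULL);
    }
}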
00:18:39.312 [2024-05-15 00:03:41.650413] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:39.312 [2024-05-15 00:03:41.655546] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.665572] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.675599] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.685625] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.695650] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.705675] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.715700] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.725725] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.735750] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.745776] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.755802] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.765830] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.775857] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.785885] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.795928] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.805938] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.815963] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.825991] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.836016] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.846041] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.856070] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.866096] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:39.312 [2024-05-15 00:03:41.876124] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.886152] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.896180] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.906206] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.916248] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.926261] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.936287] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.946313] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.956340] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.966369] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.976395] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.986424] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:41.996450] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:42.006477] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:42.016502] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:42.026529] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.312 [2024-05-15 00:03:42.036555] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.046582] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.056606] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.066634] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.076662] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.086687] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.096714] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:39.313 [2024-05-15 00:03:42.106741] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.116768] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.126795] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.136821] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.146849] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.156874] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.166902] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.176927] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.186961] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.196986] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.207017] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.225220] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.235218] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.238486] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:39.313 [2024-05-15 00:03:42.245243] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.255261] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.265290] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.275317] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.285341] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.295367] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.305393] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.315418] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.325444] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:39.313 [2024-05-15 00:03:42.335469] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.345498] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.355525] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.365550] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.375578] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.385606] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.395632] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.405658] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.415684] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.425711] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.435737] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.445762] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.455790] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.465815] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.475841] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.485868] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.495895] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.505922] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.515946] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.525975] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.536002] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.546027] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.556054] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
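(Editor's note: the run of bdev_nvme_failover_ctrlr_unsafe notices repeats every ~10 ms and continues just below until I/O resumes around 00:03:42.65; each poll finds a reset/failover already underway and is rejected rather than queued, and one reset completes mid-run at 00:03:42.238. A sketch of that kind of in-progress guard follows, assuming a bare flag and hypothetical names; the _unsafe suffix in the log's function name suggests the real check runs under the driver's own serialization, which is omitted here.)

/* Illustrative only -- not SPDK source. */
#include <stdbool.h>
#include <stdio.h>

struct ctrlr {
    bool failover_in_progress;
};

/* Start a failover unless one is already underway. */
static bool failover_ctrlr(struct ctrlr *c)
{
    if (c->failover_in_progress) {
        printf("Unable to perform failover, already in progress.\n");
        return false;
    }
    c->failover_in_progress = true;
    /* ...switch to an alternate path and kick off the reconnect... */
    return true;
}

/* Called when the reset completes ("Resetting controller successful." in
 * the log); later requests may start a new failover. */
static void failover_done(struct ctrlr *c)
{
    c->failover_in_progress = false;
}

int main(void)
{
    struct ctrlr c = { .failover_in_progress = false };

    failover_ctrlr(&c);            /* first request wins */
    for (int i = 0; i < 3; i++)
        failover_ctrlr(&c);        /* subsequent polls are rejected, as above */
    failover_done(&c);
    return failover_ctrlr(&c) ? 0 : 1;
}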
00:18:39.313 [2024-05-15 00:03:42.566080] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.576106] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.586131] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.596158] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.606187] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.616213] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.626241] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.636271] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.646295] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:39.313 [2024-05-15 00:03:42.652891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:222944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.652943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.652977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:222952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.652993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:222960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:222968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:222976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:222984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x1c2d00 00:18:39.313 
[2024-05-15 00:03:42.653109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:222992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:223000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:223008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:223016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:223024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:223032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.313 [2024-05-15 00:03:42.653310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:223040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1c2d00 00:18:39.313 [2024-05-15 00:03:42.653324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:223048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:223056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 
00:03:42.653381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:223064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:223072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:223080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:223088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:223096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:223104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:223112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:223120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:223128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:223136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:223144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:223152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:223160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:223168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:223176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:223184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:223192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:223200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:223208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:223216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.653982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.653997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:223224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1c2d00 00:18:39.314 [2024-05-15 00:03:42.654010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:223232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:223240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:223248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:223256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:223264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:223272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:223280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:223288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:223296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:223304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:223312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:223320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:223328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:223336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:223344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.314 [2024-05-15 00:03:42.654444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.314 [2024-05-15 00:03:42.654458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:223352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 
m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:223360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:223368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:223376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:223384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:223392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:223400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:223408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:223416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:223424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0 00:18:39.315 [2024-05-15 00:03:42.654757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:223432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.315 [2024-05-15 00:03:42.654771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0
00:18:39.315 [2024-05-15 00:03:42.654786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:223440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:39.315 [2024-05-15 00:03:42.654799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:b15627a0 sqhd:bb93 p:0 m:0 dnr:0
[... the same WRITE + ABORTED - SQ DELETION (00/08) record pair repeats for each queued 8-block WRITE from lba:223448 through lba:223952 (qid:1, cid varying per command); the near-identical pairs are collapsed here for readability ...]
00:18:39.317 [2024-05-15 00:03:42.672274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:39.317 [2024-05-15 00:03:42.672297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:39.317 [2024-05-15 00:03:42.672310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:223960 len:8 PRP1 0x0 PRP2 0x0
00:18:39.317 [2024-05-15 00:03:42.672340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:39.317 [2024-05-15 00:03:42.672400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:18:39.317 [2024-05-15 00:03:42.672699] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:18:39.317 [2024-05-15 00:03:42.672723] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:39.317 [2024-05-15 00:03:42.672735] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
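Every queued command above completes with the same status tuple. In NVMe terms, "(00/08)" is Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, Command Aborted due to SQ Deletion, and dnr:0 means the Do Not Retry bit is clear, so the initiator is free to resubmit once the controller comes back. A small shell helper makes the tuple readable; this is a hypothetical convenience for reading such logs, not anything in the SPDK tree:

    # decode_nvme_status: hypothetical helper for the "(SCT/SC)" pair printed by
    # spdk_nvme_print_completion, e.g. "00/08".
    decode_nvme_status() {
        local pair=${1:?usage: decode_nvme_status SCT/SC}
        local sct=$((16#${pair%/*})) sc=$((16#${pair#*/}))
        if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
            echo "SCT 0x0 (generic) / SC 0x08: Command Aborted due to SQ Deletion"
        else
            printf 'SCT 0x%x / SC 0x%02x\n' "$sct" "$sc"
        fi
    }
    decode_nvme_status 00/08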
00:18:39.317 [2024-05-15 00:03:42.672759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:39.317 [2024-05-15 00:03:42.672775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:18:39.317 [2024-05-15 00:03:42.672794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:18:39.317 [2024-05-15 00:03:42.672813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:18:39.317 [2024-05-15 00:03:42.672827] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:18:39.317 [2024-05-15 00:03:42.672853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:39.317 [2024-05-15 00:03:42.672869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:18:39.317 [2024-05-15 00:03:44.679598] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:39.317 [2024-05-15 00:03:44.679663] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:18:39.317 [2024-05-15 00:03:44.679717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:39.317 [2024-05-15 00:03:44.679735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:18:39.317 [2024-05-15 00:03:44.679756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:18:39.317 [2024-05-15 00:03:44.679772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:18:39.317 [2024-05-15 00:03:44.679786] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:18:39.317 [2024-05-15 00:03:44.679825] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:39.317 [2024-05-15 00:03:44.679843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:18:39.317 [2024-05-15 00:03:46.684829] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:39.317 [2024-05-15 00:03:46.684884] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:18:39.317 [2024-05-15 00:03:46.684924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:39.317 [2024-05-15 00:03:46.684952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:18:39.317 [2024-05-15 00:03:46.684974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:18:39.317 [2024-05-15 00:03:46.684989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:18:39.317 [2024-05-15 00:03:46.685004] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:18:39.317 [2024-05-15 00:03:46.685045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:39.317 [2024-05-15 00:03:46.685063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:18:39.317 [2024-05-15 00:03:47.749854] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:39.317
00:18:39.317 Latency(us)
00:18:39.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:39.317 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:39.317 Verification LBA range: start 0x0 length 0x8000
00:18:39.317 Nvme_mlx_0_0n1 : 90.01 9265.82 36.19 0.00 0.00 13789.20 2888.44 7058858.29
00:18:39.317 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:39.317 Verification LBA range: start 0x0 length 0x8000
00:18:39.317 Nvme_mlx_0_1n1 : 90.01 8856.53 34.60 0.00 0.00 14426.12 2694.26 7058858.29
00:18:39.317 ===================================================================================================================
00:18:39.317 Total : 18122.35 70.79 0.00 0.00 14100.47 2694.26 7058858.29
00:18:39.317 Received shutdown signal, test time was about 90.000000 seconds
00:18:39.317
00:18:39.317 Latency(us)
00:18:39.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:39.317 ===================================================================================================================
00:18:39.317 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
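The MiB/s column follows directly from IOPS at the fixed 4096-byte I/O size, so the table can be sanity-checked by hand (bc assumed available on the build host):

    # IOPS x 4096 bytes per I/O, expressed in MiB/s (1 MiB = 1048576 bytes):
    echo "scale=2; 9265.82 * 4096 / 1048576" | bc   # -> 36.19 for Nvme_mlx_0_0n1
    echo "scale=2; 8856.53 * 4096 / 1048576" | bc   # -> 34.59; bc truncates where the table rounds to 34.60

The 70.79 MiB/s total is just the sum of the two jobs. As a second cross-check, Little's law at queue depth 128 gives 128 / 13789.20 us (is) 9283 requests/s, consistent with the measured 9265.82 IOPS, and the (is)7.06 s max latency is plausibly the I/O that sat stalled across the remove/reset window.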
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@202 -- # killprocess 551498
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@946 -- # '[' -z 551498 ']'
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@950 -- # kill -0 551498
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # uname
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 551498
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 551498'
killing process with pid 551498
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@965 -- # kill 551498
[2024-05-15 00:05:01.745590] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:05:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@970 -- # wait 551498
[2024-05-15 00:05:01.783134] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@203 -- # nvmfpid=
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@205 -- # return 0
00:18:39.317
00:18:39.317 real 1m32.272s
00:18:39.317 user 4m29.232s
00:18:39.317 sys 0m2.484s
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:18:39.317 ************************************
00:18:39.317 END TEST nvmf_device_removal_pci_remove_no_srq
00:18:39.317 ************************************
00:05:02 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan
00:05:02 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:02 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:02 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x
00:18:39.317 ************************************
00:18:39.317 START TEST nvmf_device_removal_pci_remove
00:18:39.317 ************************************
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1121 -- # test_remove_and_rescan
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@720 -- # xtrace_disable
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@481 -- # nvmfpid=562431
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@482 -- # waitforlisten 562431
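waitforlisten gates the rest of the test on the freshly forked target (pid 562431) exposing its RPC socket. A minimal sketch of the idea, assuming it simply polls for the UNIX socket while the process stays alive; the real autotest_common.sh helper does more than this:

    waitforlisten_sketch() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
        # loop while the app is still running and the socket has not appeared yet
        while kill -0 "$pid" 2>/dev/null && [ ! -S "$rpc_sock" ]; do
            sleep 0.1
        done
        [ -S "$rpc_sock" ]   # succeed only if the socket actually showed up
    }
    # waitforlisten_sketch 562431 /var/tmp/spdk.sock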
00:18:39.317 00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@827 -- # '[' -z 562431 ']' 00:18:39.317 00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.317 00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.317 00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.317 00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.317 00:05:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.317 [2024-05-15 00:05:02.274826] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:18:39.317 [2024-05-15 00:05:02.274907] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.317 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.317 [2024-05-15 00:05:02.348813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:39.317 [2024-05-15 00:05:02.462439] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.318 [2024-05-15 00:05:02.462500] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.318 [2024-05-15 00:05:02.462515] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.318 [2024-05-15 00:05:02.462529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.318 [2024-05-15 00:05:02.462540] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
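Because the target was started with -e 0xFFFF, all tracepoint groups are enabled, and the banner above spells out both ways to get at them. Following its own hint, with the binary path assumed to match this build tree's layout:

    # Snapshot the live trace of instance 0 (the -i 0 passed to nvmf_tgt):
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0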
00:18:39.318 [2024-05-15 00:05:02.462613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.318 [2024-05-15 00:05:02.462619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@860 -- # return 0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.318 [2024-05-15 00:05:03.265521] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xced3d0/0xcf18c0) succeed. 00:18:39.318 [2024-05-15 00:05:03.278419] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcee8d0/0xd32f50) succeed. 
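With both reactors up, the first RPC creates the RDMA transport before any subsystem exists. rpc_cmd is the test's wrapper; the equivalent direct call (script path from this workspace) is:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices that follow confirm the transport bound both mlx5 ports.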
00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # get_rdma_if_list 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:18:39.318 00:05:03 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:18:39.318 00:05:03 
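get_ip_address is the three-stage pipeline traced above; run standalone (interface name from this run) it reads:

    # `ip -o -4 addr show` prints "IDX IFACE inet ADDR/PREFIX ...", so field 4 is
    # the address with its prefix length, and cut strips the "/24":
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8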
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.318 [2024-05-15 00:05:03.456484] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:39.318 [2024-05-15 00:05:03.456826] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # 
rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.318 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.319 [2024-05-15 00:05:03.537537] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@53 -- # return 0 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # local dev_names 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
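Stripped of the xtrace noise, bringing up one port is four RPCs against the target socket; shown here in plain rpc.py form with the mlx_0_0 values from the trace (the mlx_0_1 pass just completed above is identical with 192.168.100.9 and the _1 names):

    rpc.py bdev_malloc_create 128 512 -b mlx_0_0        # 128 MB malloc bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 \
        -t rdma -a 192.168.100.8 -s 4420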
target/device_removal.sh@91 -- # bdevperf_pid=562612 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@94 -- # waitforlisten 562612 /var/tmp/bdevperf.sock 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@827 -- # '[' -z 562612 ']' 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@860 -- # return 0 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:39.319 00:05:03 
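The initiator side is a single bdevperf instance. Reading its flags off the traced command line (meanings per the bdevperf usage text, so treat this as an informed gloss rather than documentation):

    # -m 0x4: core mask, run on core 2 only; -z: start idle and wait for RPC-driven tests;
    # -r: private RPC socket for this instance; -q 128: queue depth; -o 4096: 4 KiB I/Os;
    # -w verify: write-then-read-back workload; -t 90: run for 90 seconds.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90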
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.319 Nvme_mlx_0_0n1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.319 00:05:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:18:39.319 Nvme_mlx_0_1n1 00:18:39.319 00:05:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.319 00:05:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=562740 00:18:39.319 00:05:04 
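Each remote namespace is then attached to bdevperf over its private socket. The trailing -l -1 -o 1 appear to be the controller-loss-timeout and reconnect-delay knobs (an assumption from the rpc.py option names, not something the log states); -l -1 would mean "never give up on the controller", which matches the endless reset retries seen earlier:

    # Flags exactly as traced; only the -l/-o reading is assumed.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1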
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@112 -- # sleep 5 00:18:39.319 00:05:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/infiniband 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:39.888 mlx5_0
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_0
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0
00:18:39.888 00:05:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device [2024-05-15 00:05:09.172048] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed.
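remove_one_nic needs no hardware assistance: the `echo 1` above is redirected into the PCI function's sysfs remove node, which unbinds the driver and deletes the device from the bus, and the target immediately logs the port removal. Presumably equivalent to:

    # Sketch: software-initiated surprise hot-unplug of the port's NIC.
    # Writing 1 to the remove node drops the function from the bus, which
    # the NVMe-oF target observes as an RDMA device removal event.
    remove_one_nic() {
        echo 1 > "$(get_pci_dir "$1")/remove"
    }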
00:18:39.888 [2024-05-15 00:05:09.172155] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:39.888 [2024-05-15 00:05:09.172283] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:39.888 [2024-05-15 00:05:09.172315] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 64
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10)
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:44.092 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci
00:18:44.093 00:05:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1
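The seq 1 10 loop above is a bounded poll: each pass fetches nvmf_get_stats over RPC, extracts the per-poll-group device names with jq, and greps for the removed device; once the grep fails (return 1) the loop breaks and the surviving device count is recorded as ib_count_after_remove. Roughly:

    # Sketch: wait (at most 10 probes) for mlx5_0 to vanish from the
    # target's transport statistics after the PCI remove.
    for i in $(seq 1 10); do
        scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices[].name' \
            | grep -q mlx5_0 || break   # gone from the target: stop probing
    done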
00:18:44.689 00:05:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10
00:18:44.689 00:05:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10)
00:18:44.689 00:05:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/net
00:18:44.689 00:05:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0
00:18:44.689 00:05:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]]
00:18:44.689 00:05:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]]
00:18:44.689 00:05:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break
00:18:44.689 00:05:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]]
00:18:44.689 00:05:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up [2024-05-15 00:05:13.923232] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdbbd90/0xcf18c0) succeed.
00:18:44.689 [2024-05-15 00:05:13.923315] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen.
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}'
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]]
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10)
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
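rescan_pci's `echo 1` is the inverse of the remove: it is written to the bus rescan node so the kernel re-enumerates the slot, after which the script waits for the netdev to reappear, brings the link up, and re-adds the address (the IP configuration does not survive the remove/rescan cycle, hence the explicit ip addr add above). Assuming the standard /sys/bus/pci/rescan target:

    # Sketch: re-enumerate the removed function and re-plumb the netdev.
    echo 1 > /sys/bus/pci/rescan
    ip link set mlx_0_0 up
    [[ -z $(get_ip_address mlx_0_0) ]] && ip addr add 192.168.100.8/24 dev mlx_0_0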
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' [2024-05-15 00:05:14.861247] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** [2024-05-15 00:05:14.861291] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back [2024-05-15 00:05:14.861324] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change [2024-05-15 00:05:14.861354] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove ))
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}"
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_1
00:18:45.627 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/infiniband
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}'
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1
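Recovery is judged by a plain count: `.poll_groups[0].transports[].devices | length` from nvmf_get_stats has to climb back above the post-removal value (1 here, the surviving port) before the loop breaks and the test moves on to the second NIC. In sketch form:

    # Sketch: confirm the target re-created its IB device after the rescan.
    get_rdma_dev_count_in_nvmf_tgt() {
        scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices | length'
    }

    ib_count_after_remove=1
    for i in $(seq 1 10); do
        (( $(get_rdma_dev_count_in_nvmf_tgt) > ib_count_after_remove )) && break
    done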
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:45.628 mlx5_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1
00:18:45.628 00:05:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device [2024-05-15 00:05:14.954900] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed.
00:18:45.628 [2024-05-15 00:05:14.955037] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:45.628 [2024-05-15 00:05:14.958599] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:45.628 [2024-05-15 00:05:14.958650] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10)
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci
00:18:49.814 00:05:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1
00:18:50.074 [2024-05-15 00:05:19.356814] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xdc8d90, err 11. Skip rescan.
00:18:50.334 00:05:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10
00:18:50.334 00:05:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10)
00:18:50.334 00:05:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/net
00:18:50.334 00:05:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1
00:18:50.334 00:05:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]]
00:18:50.334 00:05:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]]
00:18:50.334 00:05:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break
00:18:50.334 00:05:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]]
00:18:50.334 00:05:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up [2024-05-15 00:05:19.473052] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcf06e0/0xd32f50) succeed.
00:18:50.334 [2024-05-15 00:05:19.473153] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen.
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}'
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]]
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10)
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
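The `err 11` warning above is EAGAIN out of ibv device initialization: this rescan attempt raced the device's return, so the target skipped it and succeeded on a later event instead. That is also why every wait in this test uses the same bounded-retry shape rather than a single check or an unbounded loop, which would hang the job on a genuinely dead port. The generic pattern, with cond_cmd as a hypothetical placeholder for whichever probe a step needs:

    # Sketch: the test's bounded-retry idiom (10 attempts, then give up).
    retry_10() {
        local cond_cmd=$1      # placeholder: device gone, netdev back, count recovered, ...
        for i in $(seq 1 10); do
            $cond_cmd && return 0
        done
        return 1
    }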
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:18:51.275 [2024-05-15 00:05:20.389329] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** [2024-05-15 00:05:20.389367] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back [2024-05-15 00:05:20.389401] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change [2024-05-15 00:05:20.389416] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove ))
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@200 -- # stop_bdevperf
00:18:51.275 00:05:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # wait 562740
00:20:12.728 0
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@118 -- # killprocess 562612
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@946 -- # '[' -z 562612 ']'
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@950 -- # kill -0 562612
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # uname
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 562612
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@964 -- # echo 'killing process with pid 562612'
00:20:12.728 killing process with pid 562612
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@965 -- # kill 562612
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@970 -- # wait 562612
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@119 -- # bdevperf_pid=
00:20:12.728 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
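killprocess, traced above, is deliberately defensive: it validates the PID argument, confirms the process is still alive with kill -0, reads the comm field to make sure it is not about to kill a sudo wrapper, and only then signals and reaps the target. Approximately:

    # Sketch of the killprocess flow visible in the trace.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1      # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }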
00:20:12.728 [2024-05-15 00:05:03.589051] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:20:12.728 [2024-05-15 00:05:03.589136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid562612 ]
00:20:12.728 EAL: No free 2048 kB hugepages reported on node 1
00:20:12.728 [2024-05-15 00:05:03.657605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:12.728 [2024-05-15 00:05:03.764509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:12.728 Running I/O for 90 seconds...
00:20:12.728 [2024-05-15 00:05:09.165258] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:20:12.728 [2024-05-15 00:05:09.165323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:12.728 [2024-05-15 00:05:09.165341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32685 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:20:12.728 [2024-05-15 00:05:09.165357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:12.728 [2024-05-15 00:05:09.165371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32685 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:20:12.728 [2024-05-15 00:05:09.165386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:12.728 [2024-05-15 00:05:09.165414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32685 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:20:12.728 [2024-05-15 00:05:09.165429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:20:12.728 [2024-05-15 00:05:09.165442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32685 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:20:12.728 [2024-05-15 00:05:09.166909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:12.728 [2024-05-15 00:05:09.166956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:20:12.728 [2024-05-15 00:05:09.167009] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:20:12.728 [2024-05-15 00:05:09.175246] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.728 [2024-05-15 00:05:09.185284] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.728 [2024-05-15 00:05:09.195295] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.728 [2024-05-15 00:05:09.205325] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.728 [2024-05-15 00:05:09.215895] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
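Everything from `Starting SPDK v24.05-pre` onward is the replayed contents of try.txt, bdevperf's captured log, not live output. The long run of identical bdev_nvme notices that begins here and continues below is expected while the controller sits in a failed state: each queued I/O nudges the failover path, which declines because a failover is already in flight. When reading a dump like this it is quicker to count than to scroll, for example:

    # Sketch: summarize the repeated notices instead of eyeballing them.
    grep -c 'Unable to perform failover' try.txt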
00:20:12.729 [2024-05-15 00:05:09.225944] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.236634] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.246987] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.257012] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.267294] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.277472] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.287796] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.298167] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.308191] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.318355] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.328719] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.339044] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.349823] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.360455] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.370899] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.381404] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.391479] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.401704] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.412107] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.422134] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.432162] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.442189] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.452217] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:12.729 [2024-05-15 00:05:09.462246] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.472271] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.482297] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.492322] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.502351] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.512561] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.522761] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.533060] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.543090] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.553198] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.563595] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.574279] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.584751] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.594925] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.605261] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.615667] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.625692] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.636063] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.647002] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.657026] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.667051] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.677386] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.687412] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:12.729 [2024-05-15 00:05:09.697990] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.708622] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.719111] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.729348] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.739373] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.749630] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.759926] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.770301] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.780922] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.791236] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.801544] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.811567] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.822003] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.832028] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.842636] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.853263] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.863834] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.874192] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.884252] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.894595] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.904620] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.915214] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.925544] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:12.729 [2024-05-15 00:05:09.935584] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.945593] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.955864] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.965892] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.976629] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.987384] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:09.998207] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.008284] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.018313] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.028334] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.038811] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.048824] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.058849] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.069708] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.080545] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.091234] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.101428] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.729 [2024-05-15 00:05:10.111757] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.730 [2024-05-15 00:05:10.121784] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.730 [2024-05-15 00:05:10.132462] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.730 [2024-05-15 00:05:10.142548] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.730 [2024-05-15 00:05:10.153257] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.730 [2024-05-15 00:05:10.163780] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
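The block that follows (and runs past the end of this excerpt) is the other half of the same teardown: bdevperf prints every command still outstanding when the qpair died, each WRITE or READ print_command line paired with an ABORTED - SQ DELETION completion, which is the expected status for I/O caught on a queue that vanished with its device. To reduce it to the aborted LBAs, something like:

    # Sketch: pull the aborted LBAs out of the dump.
    grep -o 'lba:[0-9]*' try.txt | cut -d: -f2 | sort -n | uniq | head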
00:20:12.730 [2024-05-15 00:05:10.169490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.169976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.169995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170154] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129840 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.730 [2024-05-15 00:05:10.170592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.730 [2024-05-15 00:05:10.170608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170729] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.170979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.170993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.171022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 
[2024-05-15 00:05:10.171037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.171050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.171084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.171113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.171142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.171170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.731 [2024-05-15 00:05:10.171209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x1860ef 
00:20:12.731 [2024-05-15 00:05:10.171326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 
00:05:10.171591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.731 [2024-05-15 00:05:10.171635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x1860ef 00:20:12.731 [2024-05-15 00:05:10.171648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.171972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.171986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.732 [2024-05-15 00:05:10.172790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x1860ef 00:20:12.732 [2024-05-15 00:05:10.172804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.172822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.172837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.172852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.172866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.172881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.172894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.172926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.172949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.172966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.172983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.172999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.173419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x1860ef 00:20:12.733 [2024-05-15 00:05:10.173433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.188990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:12.733 [2024-05-15 00:05:10.189014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:12.733 [2024-05-15 00:05:10.189028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129600 len:8 PRP1 0x0 PRP2 0x0 00:20:12.733 [2024-05-15 00:05:10.189042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.733 [2024-05-15 00:05:10.191880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:12.733 [2024-05-15 00:05:10.192250] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:12.733 [2024-05-15 00:05:10.192280] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:12.733 [2024-05-15 00:05:10.192293] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:12.733 [2024-05-15 00:05:10.192321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:12.733 [2024-05-15 00:05:10.192338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:20:12.733 [2024-05-15 00:05:10.192357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:20:12.734 [2024-05-15 00:05:10.192373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:20:12.734 [2024-05-15 00:05:10.192388] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:20:12.734 [2024-05-15 00:05:10.192419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:12.734 [2024-05-15 00:05:10.192436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:20:12.734 [2024-05-15 00:05:12.197571] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:20:12.734 [2024-05-15 00:05:12.197628] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:20:12.734 [2024-05-15 00:05:12.197666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:12.734 [2024-05-15 00:05:12.197683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:20:12.734 [2024-05-15 00:05:12.197954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:20:12.734 [2024-05-15 00:05:12.197975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:20:12.734 [2024-05-15 00:05:12.197990] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:20:12.734 [2024-05-15 00:05:12.198040] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:12.734 [2024-05-15 00:05:12.198058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:20:12.734 [2024-05-15 00:05:14.204271] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:20:12.734 [2024-05-15 00:05:14.204321] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:20:12.734 [2024-05-15 00:05:14.204357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:12.734 [2024-05-15 00:05:14.204374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:20:12.734 [2024-05-15 00:05:14.204393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:20:12.734 [2024-05-15 00:05:14.204407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:20:12.734 [2024-05-15 00:05:14.204420] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:20:12.734 [2024-05-15 00:05:14.204458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
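Every data-path completion in the dump above carries the same status pair "(00/08)": status code type 0h (Generic Command Status) with status code 08h, which the NVMe specification defines as Command Aborted due to SQ Deletion. That is, the queued WRITEs and READs were aborted because their submission queue was torn down for the controller reset, not because of a media or transport data error. A minimal decoding sketch (illustrative Python, not SPDK code; only the codes seen in this log are named):

# Minimal sketch: decode the "(SCT/SC)" pair that spdk_nvme_print_completion
# prints, e.g. "ABORTED - SQ DELETION (00/08)". Only the codes visible in
# this log are listed; the full tables live in the NVMe spec.

GENERIC_STATUS = {  # Status Code Type 0x0 = Generic Command Status
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # command aborted because its SQ was deleted
}

def decode_status(sct: int, sc: int) -> str:
    """Map an (SCT, SC) pair from a completion entry to a readable name."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} / sc 0x{sc:02x}"

# The log's "(00/08)" therefore reads as an SQ-deletion abort:
print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION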
00:20:12.734 [2024-05-15 00:05:14.204475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:20:12.734 [2024-05-15 00:05:14.955504] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:20:12.734 [2024-05-15 00:05:14.955547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:12.734 [2024-05-15 00:05:14.955572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32685 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:20:12.734 [2024-05-15 00:05:14.955588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:12.734 [2024-05-15 00:05:14.955602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32685 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:20:12.734 [2024-05-15 00:05:14.955616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:12.734 [2024-05-15 00:05:14.955629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32685 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:20:12.734 [2024-05-15 00:05:14.955643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:20:12.734 [2024-05-15 00:05:14.955656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32685 cdw0:16 sqhd:45dc p:0 m:0 dnr:0
00:20:12.734 [2024-05-15 00:05:14.962723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:12.734 [2024-05-15 00:05:14.962754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:20:12.734 [2024-05-15 00:05:14.962826] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:20:12.734 [2024-05-15 00:05:14.965508] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:14.975545] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:14.985562] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:14.995588] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.005615] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.015641] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.025668] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.035695] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.045720] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.055747] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.065772] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.075799] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.085826] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.095852] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.105877] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.115903] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.125932] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.135954] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.145982] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.156007] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.166032] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.176057] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.186084] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.196112] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.206175] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.216205] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.233628] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.243590] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.246799] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:12.734 [2024-05-15 00:05:15.253615] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.263639] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.273668] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
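The failed reset attempts above are evenly paced: the disconnect at 00:05:10.19 is followed by retries at 00:05:12.20 and 00:05:14.20, roughly two seconds apart, until the attempt logged at 00:05:15.246799 finally reports "Resetting controller successful." A hedged sketch for measuring that cadence from a saved console log ("build.log" is a placeholder path; the regex follows the record format shown above):

# Illustrative only: pull the app-side timestamps of each "resetting controller"
# notice out of a saved console log and print the spacing between attempts.
import re
from datetime import datetime

PATTERN = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\] .*resetting controller")

def reset_attempts(path: str) -> None:
    stamps = []
    with open(path) as log:
        for line in log:
            m = PATTERN.search(line)
            if m:
                stamps.append(datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f"))
    for prev, cur in zip(stamps, stamps[1:]):
        print(f"{cur}  (+{(cur - prev).total_seconds():.3f}s)")

# reset_attempts("build.log") on the excerpt above would show gaps of roughly
# 2.006 s between the attempts at 00:05:10.19, 00:05:12.20 and 00:05:14.20.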
00:20:12.734 [2024-05-15 00:05:15.283694] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.293721] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.303747] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.313775] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.323802] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.333829] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.343855] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.353880] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.363905] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.373934] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.383956] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.393984] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.404012] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.414040] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.424065] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.434091] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.444117] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.454143] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.464170] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.474195] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.484222] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.494251] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.504276] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.514301] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.524328] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.534356] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.544382] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.734 [2024-05-15 00:05:15.554409] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.564436] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.574464] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.584491] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.594516] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.604543] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.614570] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.624596] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.634623] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.644649] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.654676] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.664704] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.674730] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.684755] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.694783] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.704809] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.714834] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.724859] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.734887] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.744911] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.754937] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.764964] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.774989] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.785016] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.795041] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.805069] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.815094] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.825122] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.835149] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.845177] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.855203] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.865228] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.875257] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.885281] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.895307] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.905333] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.915357] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.925384] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.935411] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.945436] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:12.735 [2024-05-15 00:05:15.955464] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
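Once the reset succeeds, the log immediately returns to another burst of aborted WRITE completions as the next queue pair is drained. When triaging a run like this, reducing the notice spam to counts makes the sequence easier to read; a small illustrative summarizer ("build.log" and the bucket labels are assumptions, not part of the test suite):

# Sketch of a post-mortem triage pass over a saved autotest console log:
# count printed I/O commands, SQ-deletion aborts, skipped failovers, and
# successful resets.
import re
from collections import Counter

def summarize(path: str) -> Counter:
    counts = Counter()
    with open(path) as log:
        for line in log:
            m = re.search(r"\*NOTICE\*: (WRITE|READ) sqid:\d+ cid:\d+", line)
            if m:
                counts[f"{m.group(1)} printed"] += 1
            if "ABORTED - SQ DELETION" in line:
                counts["aborted (SQ deletion)"] += 1
            if "Unable to perform failover, already in progress" in line:
                counts["failover skipped"] += 1
            if "Resetting controller successful" in line:
                counts["reset succeeded"] += 1
    return counts

if __name__ == "__main__":
    for key, n in summarize("build.log").most_common():
        print(f"{key:24} {n}")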
00:20:12.735 [2024-05-15 00:05:15.965305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:246984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:12.735 [2024-05-15 00:05:15.965329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0
00:20:12.735 [... 103 queued WRITE commands (lba 246984 through 247800, step 8, len:8 each) are dumped in the same way, every one paired with the identical ABORTED - SQ DELETION (00/08) completion ...]
00:20:12.738 [2024-05-15 00:05:15.968453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:247800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:12.738 [2024-05-15 00:05:15.968467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0
00:20:12.738 [2024-05-15 00:05:15.968482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:246784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fe000 len:0x1000 key:0x1c20ef
00:20:12.738 [2024-05-15 00:05:15.968496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0
00:20:12.738 [... 24 queued READ commands (lba 246784 through 246968, step 8, keyed SGL buffers descending from 0x2000079fe000, key:0x1c20ef) are dumped in the same way, each with the identical ABORTED - SQ DELETION (00/08) completion ...]
00:20:12.739 [2024-05-15 00:05:15.969182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:246968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d0000 len:0x1000 key:0x1c20ef
00:20:12.739 [2024-05-15 00:05:15.969195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32685 cdw0:80e5f090 sqhd:bb93 p:0 m:0 dnr:0
00:20:12.739 [2024-05-15 00:05:15.984095] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests
00:20:12.739 [2024-05-15 00:05:15.984183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:12.739 [2024-05-15 00:05:15.984202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:12.739 [2024-05-15 00:05:15.984215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:246976 len:8 PRP1 0x0 PRP2 0x0
00:20:12.739 [2024-05-15 00:05:15.984229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:12.739 [2024-05-15 00:05:15.984294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:20:12.739 [2024-05-15 00:05:15.984574] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:20:12.739 [2024-05-15 00:05:15.984598] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:20:12.739 [2024-05-15 00:05:15.984609] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:20:12.739 [2024-05-15 00:05:15.984636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:12.739 [2024-05-15 00:05:15.984651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:20:12.739 [2024-05-15 00:05:15.984670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:20:12.739 [2024-05-15 00:05:15.984684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:20:12.739 [2024-05-15 00:05:15.984697] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:20:12.739 [2024-05-15 00:05:15.984723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
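At this point the test has removed the mlx5 function out from under the initiator, so every reset attempt fails at RDMA address resolution until the device is rescanned. A harness can make that wait explicit before expecting a reset to succeed; a minimal sketch, assuming the standard sysfs RDMA device registry and an mlx5 device name (illustrative only, not the device_removal.sh implementation):

  # Wait up to ~30 s for an mlx5 RDMA device to reappear after a PCI remove/rescan,
  # polling the kernel's registry of InfiniBand/RoCE devices.
  for i in $(seq 1 30); do
      ls /sys/class/infiniband 2>/dev/null | grep -q 'mlx5_' && break
      sleep 1
  done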
00:20:12.739 [2024-05-15 00:05:15.984738] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:20:12.739 [2024-05-15 00:05:17.990070] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:20:12.739 [2024-05-15 00:05:17.990130] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:20:12.739 [2024-05-15 00:05:17.990169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:12.739 [2024-05-15 00:05:17.990186] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:20:12.739 [2024-05-15 00:05:17.990258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:20:12.739 [2024-05-15 00:05:17.990277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:20:12.739 [2024-05-15 00:05:17.990291] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:20:12.739 [2024-05-15 00:05:17.990320] bdev_nvme.c:2873:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes.
00:20:12.739 [2024-05-15 00:05:17.990359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:12.739 [2024-05-15 00:05:17.990394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:20:12.739 [2024-05-15 00:05:18.993321] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:20:12.739 [2024-05-15 00:05:18.993383] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:20:12.739 [2024-05-15 00:05:18.993424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:12.739 [2024-05-15 00:05:18.993441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:20:12.739 [2024-05-15 00:05:18.994495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:20:12.739 [2024-05-15 00:05:18.994520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:20:12.739 [2024-05-15 00:05:18.994536] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:20:12.739 [2024-05-15 00:05:18.995496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
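Note the spacing of the reset cycles above: attempts begin at roughly 00:05:15.98, 00:05:17.99, 00:05:18.99 and then 00:05:21.00, so reconnects are paced seconds apart rather than spinning on the dead path. The same pattern in a generic shell harness looks like the sketch below; the retry count, the 2-second interval and the try_reconnect helper are all assumptions for illustration, not SPDK defaults:

  attempt=0
  until try_reconnect; do          # try_reconnect: hypothetical helper, returns 0 once the path is back
      attempt=$((attempt + 1))
      if [ "$attempt" -ge 5 ]; then
          echo "giving up after $attempt attempts" >&2
          exit 1
      fi
      sleep 2                      # pace retries so address resolution has time to recover
  done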
00:20:12.739 [2024-05-15 00:05:18.995522] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:20:12.739 [2024-05-15 00:05:21.000764] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:20:12.739 [2024-05-15 00:05:21.000825] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:20:12.739 [2024-05-15 00:05:21.000886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:12.739 [2024-05-15 00:05:21.000903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:20:12.739 [2024-05-15 00:05:21.001490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:20:12.739 [2024-05-15 00:05:21.001514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:20:12.739 [2024-05-15 00:05:21.001530] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:20:12.739 [2024-05-15 00:05:21.001602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:12.739 [2024-05-15 00:05:21.001626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:20:12.739 [2024-05-15 00:05:22.057759] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:12.739
00:20:12.739 Latency(us)
00:20:12.739 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:20:12.739 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:12.739 Verification LBA range: start 0x0 length 0x8000
00:20:12.739 Nvme_mlx_0_0n1 : 90.01  9331.77  36.45  0.00  0.00  13692.36  2706.39  7058858.29
00:20:12.739 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:12.739 Verification LBA range: start 0x0 length 0x8000
00:20:12.739 Nvme_mlx_0_1n1 : 90.01  8761.41  34.22  0.00  0.00  14583.51  2645.71  8053063.68
00:20:12.739 ===================================================================================================================
00:20:12.739 Total : 18093.17  70.68  0.00  0.00  14123.90  2645.71  8053063.68
00:20:12.739 Received shutdown signal, test time was about 90.000000 seconds
00:20:12.739
00:20:12.739 Latency(us)
00:20:12.739 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:20:12.739 ===================================================================================================================
00:20:12.739 Total : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@202 -- # killprocess 562431
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@946 -- # '[' -z 562431 ']'
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@950 -- # kill -0 562431
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # uname
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 562431
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@964 -- # echo 'killing process with pid 562431'
00:20:12.739 killing process with pid 562431
00:20:12.739 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@965 -- # kill 562431
00:20:12.740 [2024-05-15 00:06:34.754387] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:20:12.740 00:06:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@970 -- # wait 562431
00:20:12.740 [2024-05-15 00:06:34.817780] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@203 -- # nvmfpid=
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@205 -- # return 0
00:20:12.740
00:20:12.740 real	1m32.978s
00:20:12.740 user	4m30.551s
00:20:12.740 sys	0m2.755s
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1122 -- # xtrace_disable
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:20:12.740 ************************************
00:20:12.740 END TEST nvmf_device_removal_pci_remove
00:20:12.740 ************************************
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@317 -- # nvmftestfini
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@117 -- # sync
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@120 -- # set +e
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:20:12.740 rmmod nvme_rdma
00:20:12.740 rmmod nvme_fabrics
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@124 -- # set -e
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@125 -- # return 0
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@318 -- # clean_bond_device
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # ip link
00:20:12.740 00:06:35 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # grep bond_nvmf
00:20:12.740
00:20:12.740 real	3m8.131s
00:20:12.740 user	9m0.826s
00:20:12.740 sys	0m7.181s
00:20:12.740 00:06:35 nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable
00:20:12.740 00:06:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:20:12.740 ************************************
00:20:12.740 END TEST nvmf_device_removal
00:20:12.740 ************************************
00:20:12.740 00:06:35 nvmf_rdma -- nvmf/nvmf.sh@80 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:20:12.740 00:06:35 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:20:12.740 00:06:35 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:20:12.740 00:06:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:20:12.740 ************************************
00:20:12.740 START TEST nvmf_srq_overwhelm
00:20:12.740 ************************************
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:20:12.740 * Looking for test storage...
00:20:12.740 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:... [full value elided; @2 through @4 each re-prepend the golangci, protoc and go tool directories to the already-long PATH]
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:... [full value elided]
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:... [full value elided]
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:... [full value elided]
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable
00:20:12.740 00:06:35 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=()
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=()
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=()
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=()
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722
00:20:12.740 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:20:12.741 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:20:12.741 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:20:12.741 00:06:37 
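[editor's note] The device scan traced here and on the next lines reduces to: build the Mellanox ID table, filter pci_devs down to it (SPDK_TEST_NVMF_NICS=mlx5), then map each PCI function to its netdev via sysfs. A sketch assuming pci_bus_cache (an associative array keyed "vendor:device" -> BDF list) was filled by an earlier scan, as it is in nvmf/common.sh:

    mellanox=0x15b3
    mlx=()
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # ConnectX-5 — both ports 0000:09:00.0/.1 found here
    pci_devs=("${mlx[@]}")                        # keep only mlx devices for the mlx5 NIC config
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done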
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:20:12.741 Found net devices under 0000:09:00.0: mlx_0_0 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:20:12.741 Found net devices under 0000:09:00.1: mlx_0_1 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 
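[editor's note] rdma_device_init, traced above, is just the kernel-module prelude before IP assignment. The load_ib_rdma_modules step as executed (commands lifted from the trace; Linux only):

    modprobe ib_cm
    modprobe ib_core
    modprobe ib_umad
    modprobe ib_uverbs
    modprobe iw_cm
    modprobe rdma_cm
    modprobe rdma_ucm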
00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:12.741 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.741 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:20:12.741 altname enp9s0f0np0 00:20:12.741 inet 192.168.100.8/24 scope global mlx_0_0 00:20:12.741 valid_lft forever preferred_lft forever 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:12.741 00:06:37 
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:12.741 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.741 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:20:12.741 altname enp9s0f1np1 00:20:12.741 inet 192.168.100.9/24 scope global mlx_0_1 00:20:12.741 valid_lft forever preferred_lft forever 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.741 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # 
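[editor's note] get_ip_address, used twice above and again below to build RDMA_IP_LIST, is a three-stage pipeline over `ip -o` output. Standalone:

    get_ip_address() {
        local interface=$1
        # first IPv4 address on the interface, without the /prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8
    get_ip_address mlx_0_1    # -> 192.168.100.9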
interface=mlx_0_0 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:12.742 192.168.100.9' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:12.742 192.168.100.9' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:12.742 192.168.100.9' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=576044 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 576044 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@827 -- # '[' -z 576044 ']' 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:12.742 00:06:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:12.742 [2024-05-15 00:06:37.894166] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:20:12.742 [2024-05-15 00:06:37.894256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.742 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.742 [2024-05-15 00:06:37.969522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:12.742 [2024-05-15 00:06:38.088490] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.742 [2024-05-15 00:06:38.088552] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.742 [2024-05-15 00:06:38.088568] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.742 [2024-05-15 00:06:38.088581] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.742 [2024-05-15 00:06:38.088593] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.742 [2024-05-15 00:06:38.088656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.742 [2024-05-15 00:06:38.088710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.742 [2024-05-15 00:06:38.088831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.742 [2024-05-15 00:06:38.088834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # return 0 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:12.742 [2024-05-15 00:06:38.259989] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20eca20/0x20f0f10) succeed. 00:20:12.742 [2024-05-15 00:06:38.271160] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20ee060/0x21325a0) succeed. 
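[editor's note] With both target IPs resolved, the bring-up traced above comes down to three steps. A sketch (workspace path shortened; per scripts/rpc.py, -u sets the io-unit-size and -s the max SRQ depth — the resource this test is designed to overwhelm):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # autotest helper: polls until /var/tmp/spdk.sock accepts RPCs
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024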
00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:12.742 Malloc0 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.742 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:20:12.743 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.743 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:12.743 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.743 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:12.743 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.743 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:12.743 [2024-05-15 00:06:38.370508] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:12.743 [2024-05-15 00:06:38.370864] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:12.743 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.743 00:06:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme0n1 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # 
return 0 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:13.001 Malloc1 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.001 00:06:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme1n1 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme1n1 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- 
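[editor's note] waitforblk, which gates each connect above and below, polls until the freshly attached namespace shows up as a block device. A sketch reconstructed from the traced probes — the retry bound here is an assumption; the real helper in autotest_common.sh sets its own:

    waitforblk() {
        local i=0
        while ! lsblk -l -o NAME | grep -q -w "$1"; do
            (( ++i > 200 )) && return 1    # assumed cap, ~20s at 0.1s per attempt
            sleep 0.1
        done
        return 0
    }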
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:17.195 Malloc2 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.195 00:06:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme2n1 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme2n1 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:20.481 Malloc3 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.481 00:06:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme3n1 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme3n1 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.669 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:24.670 Malloc4 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 
192.168.100.8 -s 4420 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.670 00:06:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme4n1 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme4n1 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:27.957 Malloc5 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.957 00:06:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma 
-n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:20:31.244 00:07:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:20:31.244 00:07:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:20:31.244 00:07:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:20:31.244 00:07:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme5n1 00:20:31.244 00:07:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:31.244 00:07:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme5n1 00:20:31.244 00:07:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:20:31.244 00:07:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:20:31.244 [global] 00:20:31.244 thread=1 00:20:31.244 invalidate=1 00:20:31.244 rw=read 00:20:31.244 time_based=1 00:20:31.244 runtime=10 00:20:31.244 ioengine=libaio 00:20:31.244 direct=1 00:20:31.244 bs=1048576 00:20:31.244 iodepth=128 00:20:31.244 norandommap=1 00:20:31.244 numjobs=13 00:20:31.244 00:20:31.244 [job0] 00:20:31.244 filename=/dev/nvme0n1 00:20:31.244 [job1] 00:20:31.244 filename=/dev/nvme1n1 00:20:31.244 [job2] 00:20:31.244 filename=/dev/nvme2n1 00:20:31.244 [job3] 00:20:31.244 filename=/dev/nvme3n1 00:20:31.244 [job4] 00:20:31.244 filename=/dev/nvme4n1 00:20:31.244 [job5] 00:20:31.244 filename=/dev/nvme5n1 00:20:31.244 Could not set queue depth (nvme0n1) 00:20:31.244 Could not set queue depth (nvme1n1) 00:20:31.244 Could not set queue depth (nvme2n1) 00:20:31.244 Could not set queue depth (nvme3n1) 00:20:31.244 Could not set queue depth (nvme4n1) 00:20:31.244 Could not set queue depth (nvme5n1) 00:20:31.502 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:31.502 ... 00:20:31.502 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:31.503 ... 00:20:31.503 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:31.503 ... 00:20:31.503 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:31.503 ... 00:20:31.503 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:31.503 ... 00:20:31.503 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:31.503 ... 
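[editor's note] The whole six-subsystem setup traced since the transport was created condenses to one loop; with numjobs=13 in the job file above, 6 namespaces x 13 jobs gives the 78 fio threads reported next. A condensed sketch (rpc_cmd wraps scripts/rpc.py; NVME_HOST carries --hostnqn/--hostid from common.sh, which also lowered the connect queue count to -i 15 for the ConnectX-5 ports found earlier):

    for i in $(seq 0 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
        waitforblk nvme${i}n1
    done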
00:20:31.503 fio-3.35 00:20:31.503 Starting 78 threads 00:20:46.373 00:20:46.373 job0: (groupid=0, jobs=1): err= 0: pid=578939: Wed May 15 00:07:13 2024 00:20:46.373 read: IOPS=4, BW=4561KiB/s (4671kB/s)(55.0MiB/12347msec) 00:20:46.373 slat (usec): min=482, max=2113.6k, avg=185306.63, stdev=553452.87 00:20:46.373 clat (msec): min=2154, max=12344, avg=8641.33, stdev=3531.91 00:20:46.373 lat (msec): min=4081, max=12346, avg=8826.64, stdev=3451.70 00:20:46.373 clat percentiles (msec): 00:20:46.373 | 1.00th=[ 2165], 5.00th=[ 4077], 10.00th=[ 4111], 20.00th=[ 4144], 00:20:46.373 | 30.00th=[ 4212], 40.00th=[ 8490], 50.00th=[10537], 60.00th=[10671], 00:20:46.373 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:20:46.373 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:20:46.373 | 99.99th=[12281] 00:20:46.373 lat (msec) : >=2000=100.00% 00:20:46.373 cpu : usr=0.01%, sys=0.26%, ctx=113, majf=0, minf=14081 00:20:46.373 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:20:46.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.373 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:46.373 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.373 job0: (groupid=0, jobs=1): err= 0: pid=578940: Wed May 15 00:07:13 2024 00:20:46.373 read: IOPS=90, BW=90.9MiB/s (95.4MB/s)(1126MiB/12381msec) 00:20:46.373 slat (usec): min=57, max=2074.6k, avg=9128.94, stdev=84785.05 00:20:46.373 clat (msec): min=133, max=4587, avg=1275.61, stdev=1618.66 00:20:46.373 lat (msec): min=133, max=4595, avg=1284.74, stdev=1625.49 00:20:46.373 clat percentiles (msec): 00:20:46.373 | 1.00th=[ 134], 5.00th=[ 136], 10.00th=[ 155], 20.00th=[ 180], 00:20:46.373 | 30.00th=[ 255], 40.00th=[ 292], 50.00th=[ 359], 60.00th=[ 401], 00:20:46.373 | 70.00th=[ 1301], 80.00th=[ 3239], 90.00th=[ 4396], 95.00th=[ 4463], 00:20:46.373 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:20:46.373 | 99.99th=[ 4597] 00:20:46.373 bw ( KiB/s): min= 1878, max=675840, per=6.35%, avg=170420.67, stdev=221541.24, samples=12 00:20:46.373 iops : min= 1, max= 660, avg=166.17, stdev=216.47, samples=12 00:20:46.373 lat (msec) : 250=28.69%, 500=38.28%, 750=0.36%, 2000=8.17%, >=2000=24.51% 00:20:46.373 cpu : usr=0.06%, sys=1.53%, ctx=1218, majf=0, minf=32769 00:20:46.373 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:20:46.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.373 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.373 issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.373 job0: (groupid=0, jobs=1): err= 0: pid=578941: Wed May 15 00:07:13 2024 00:20:46.373 read: IOPS=1, BW=1579KiB/s (1617kB/s)(19.0MiB/12322msec) 00:20:46.373 slat (usec): min=526, max=2150.9k, avg=539549.12, stdev=904923.36 00:20:46.373 clat (msec): min=2069, max=12215, avg=8860.59, stdev=3470.61 00:20:46.373 lat (msec): min=4204, max=12321, avg=9400.14, stdev=3137.08 00:20:46.373 clat percentiles (msec): 00:20:46.373 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 4212], 20.00th=[ 4245], 00:20:46.373 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:20:46.373 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12281], 00:20:46.373 | 
99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.373 | 99.99th=[12281]
00:20:46.373 lat (msec) : >=2000=100.00%
00:20:46.373 cpu : usr=0.00%, sys=0.09%, ctx=44, majf=0, minf=4865
00:20:46.373 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0%
00:20:46.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.373 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.373 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.373 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.373 job0: (groupid=0, jobs=1): err= 0: pid=578942: Wed May 15 00:07:13 2024
00:20:46.373 read: IOPS=75, BW=75.3MiB/s (79.0MB/s)(931MiB/12360msec)
00:20:46.373 slat (usec): min=56, max=2126.3k, avg=11034.51, stdev=110462.20
00:20:46.373 clat (msec): min=379, max=8715, avg=1637.93, stdev=2637.45
00:20:46.373 lat (msec): min=380, max=8716, avg=1648.97, stdev=2646.07
00:20:46.373 clat percentiles (msec):
00:20:46.373 | 1.00th=[ 397], 5.00th=[ 468], 10.00th=[ 472], 20.00th=[ 477],
00:20:46.373 | 30.00th=[ 485], 40.00th=[ 510], 50.00th=[ 567], 60.00th=[ 651],
00:20:46.373 | 70.00th=[ 726], 80.00th=[ 818], 90.00th=[ 8154], 95.00th=[ 8490],
00:20:46.373 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658],
00:20:46.373 | 99.99th=[ 8658]
00:20:46.373 bw ( KiB/s): min= 1965, max=272384, per=5.57%, avg=149646.55, stdev=105470.05, samples=11
00:20:46.373 iops : min= 1, max= 266, avg=146.00, stdev=103.10, samples=11
00:20:46.373 lat (msec) : 500=35.77%, 750=37.16%, 1000=12.78%, >=2000=14.29%
00:20:46.373 cpu : usr=0.07%, sys=1.50%, ctx=647, majf=0, minf=32769
00:20:46.373 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.2%
00:20:46.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.373 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:46.373 issued rwts: total=931,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.373 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.373 job0: (groupid=0, jobs=1): err= 0: pid=578943: Wed May 15 00:07:13 2024
00:20:46.373 read: IOPS=36, BW=36.9MiB/s (38.7MB/s)(381MiB/10333msec)
00:20:46.373 slat (usec): min=61, max=4299.1k, avg=26955.80, stdev=265239.82
00:20:46.373 clat (msec): min=59, max=9077, avg=3332.94, stdev=3637.78
00:20:46.373 lat (msec): min=460, max=9080, avg=3359.90, stdev=3643.56
00:20:46.373 clat percentiles (msec):
00:20:46.373 | 1.00th=[ 464], 5.00th=[ 472], 10.00th=[ 493], 20.00th=[ 523],
00:20:46.373 | 30.00th=[ 575], 40.00th=[ 634], 50.00th=[ 718], 60.00th=[ 818],
00:20:46.373 | 70.00th=[ 4732], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 9060],
00:20:46.373 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:20:46.373 | 99.99th=[ 9060]
00:20:46.373 bw ( KiB/s): min= 2043, max=261620, per=3.21%, avg=86269.17, stdev=97975.63, samples=6
00:20:46.373 iops : min= 1, max= 255, avg=84.00, stdev=95.68, samples=6
00:20:46.373 lat (msec) : 100=0.26%, 500=11.81%, 750=39.37%, 1000=8.66%, >=2000=39.90%
00:20:46.373 cpu : usr=0.00%, sys=0.93%, ctx=301, majf=0, minf=32769
00:20:46.373 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5%
00:20:46.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.373 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:20:46.373 issued rwts: total=381,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.373 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.373 job0: (groupid=0, jobs=1): err= 0: pid=578944: Wed May 15 00:07:13 2024
00:20:46.373 read: IOPS=6, BW=6563KiB/s (6720kB/s)(66.0MiB/10298msec)
00:20:46.373 slat (usec): min=518, max=2146.5k, avg=155177.34, stdev=522993.10
00:20:46.373 clat (msec): min=55, max=10296, avg=6328.87, stdev=3464.11
00:20:46.373 lat (msec): min=2138, max=10297, avg=6484.04, stdev=3407.71
00:20:46.373 clat percentiles (msec):
00:20:46.373 | 1.00th=[ 56], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2232],
00:20:46.373 | 30.00th=[ 4396], 40.00th=[ 4396], 50.00th=[ 4396], 60.00th=[ 8658],
00:20:46.373 | 70.00th=[10134], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:20:46.373 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.373 | 99.99th=[10268]
00:20:46.373 lat (msec) : 100=1.52%, >=2000=98.48%
00:20:46.373 cpu : usr=0.00%, sys=0.44%, ctx=80, majf=0, minf=16897
00:20:46.373 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5%
00:20:46.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.373 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:20:46.373 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.373 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.373 job0: (groupid=0, jobs=1): err= 0: pid=578946: Wed May 15 00:07:13 2024
00:20:46.373 read: IOPS=2, BW=2080KiB/s (2130kB/s)(25.0MiB/12306msec)
00:20:46.373 slat (usec): min=686, max=4215.2k, avg=409497.09, stdev=1014912.89
00:20:46.373 clat (msec): min=2068, max=12304, avg=9498.99, stdev=3479.87
00:20:46.373 lat (msec): min=4165, max=12305, avg=9908.49, stdev=3156.34
00:20:46.373 clat percentiles (msec):
00:20:46.373 | 1.00th=[ 2072], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 6342],
00:20:46.373 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[12147], 60.00th=[12147],
00:20:46.373 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.373 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.373 | 99.99th=[12281]
00:20:46.373 lat (msec) : >=2000=100.00%
00:20:46.373 cpu : usr=0.00%, sys=0.11%, ctx=62, majf=0, minf=6401
00:20:46.374 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0%
00:20:46.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.374 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.374 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.374 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.374 job0: (groupid=0, jobs=1): err= 0: pid=578947: Wed May 15 00:07:13 2024
00:20:46.374 read: IOPS=169, BW=169MiB/s (178MB/s)(2077MiB/12267msec)
00:20:46.374 slat (usec): min=95, max=1056.3k, avg=4901.26, stdev=25428.93
00:20:46.374 clat (msec): min=142, max=4008, avg=732.59, stdev=773.87
00:20:46.374 lat (msec): min=144, max=4011, avg=737.49, stdev=776.24
00:20:46.374 clat percentiles (msec):
00:20:46.374 | 1.00th=[ 148], 5.00th=[ 169], 10.00th=[ 186], 20.00th=[ 317],
00:20:46.374 | 30.00th=[ 363], 40.00th=[ 401], 50.00th=[ 600], 60.00th=[ 735],
00:20:46.374 | 70.00th=[ 793], 80.00th=[ 860], 90.00th=[ 902], 95.00th=[ 3339],
00:20:46.374 | 99.00th=[ 3876], 99.50th=[ 3977], 99.90th=[ 4010], 99.95th=[ 4010],
00:20:46.374 | 99.99th=[ 4010]
00:20:46.374 bw ( KiB/s): min= 1446, max=628736, per=7.43%, avg=199611.60, stdev=147025.56, samples=20
00:20:46.374 iops : min= 1, max= 614, avg=194.85, stdev=143.61, samples=20
00:20:46.374 lat (msec) : 250=14.20%, 500=31.20%, 750=18.63%, 1000=29.80%, 2000=0.05%
00:20:46.374 lat (msec) : >=2000=6.11%
00:20:46.374 cpu : usr=0.09%, sys=2.84%, ctx=1471, majf=0, minf=32769
00:20:46.374 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:20:46.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.374 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:46.374 issued rwts: total=2077,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.374 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.374 job0: (groupid=0, jobs=1): err= 0: pid=578948: Wed May 15 00:07:13 2024
00:20:46.374 read: IOPS=2, BW=2910KiB/s (2980kB/s)(35.0MiB/12315msec)
00:20:46.374 slat (usec): min=644, max=4227.3k, avg=292975.64, stdev=877083.64
00:20:46.374 clat (msec): min=2060, max=12312, avg=10549.58, stdev=3159.10
00:20:46.374 lat (msec): min=4165, max=12314, avg=10842.55, stdev=2804.17
00:20:46.374 clat percentiles (msec):
00:20:46.374 | 1.00th=[ 2056], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6409],
00:20:46.374 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12147], 60.00th=[12281],
00:20:46.374 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.374 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.374 | 99.99th=[12281]
00:20:46.374 lat (msec) : >=2000=100.00%
00:20:46.374 cpu : usr=0.00%, sys=0.19%, ctx=54, majf=0, minf=8961
00:20:46.374 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0%
00:20:46.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.374 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.374 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.374 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.374 job0: (groupid=0, jobs=1): err= 0: pid=578949: Wed May 15 00:07:13 2024
00:20:46.374 read: IOPS=9, BW=9.78MiB/s (10.3MB/s)(121MiB/12375msec)
00:20:46.374 slat (usec): min=433, max=2121.8k, avg=85157.27, stdev=379673.80
00:20:46.374 clat (msec): min=2069, max=12373, avg=9377.40, stdev=2293.14
00:20:46.374 lat (msec): min=4191, max=12374, avg=9462.56, stdev=2209.30
00:20:46.374 clat percentiles (msec):
00:20:46.374 | 1.00th=[ 4178], 5.00th=[ 6342], 10.00th=[ 6409], 20.00th=[ 8221],
00:20:46.374 | 30.00th=[ 8288], 40.00th=[ 8423], 50.00th=[ 8490], 60.00th=[10537],
00:20:46.374 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12416], 95.00th=[12416],
00:20:46.374 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416],
00:20:46.374 | 99.99th=[12416]
00:20:46.374 lat (msec) : >=2000=100.00%
00:20:46.374 cpu : usr=0.00%, sys=0.62%, ctx=96, majf=0, minf=30977
00:20:46.374 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.6%, 16=13.2%, 32=26.4%, >=64=47.9%
00:20:46.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.374 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:20:46.374 issued rwts: total=121,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.374 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.374 job0: (groupid=0, jobs=1): err= 0: pid=578950: Wed May 15 00:07:13 2024
00:20:46.374 read: IOPS=40, BW=40.0MiB/s (42.0MB/s)(404MiB/10097msec)
00:20:46.374 slat (usec): min=204, max=2127.3k, avg=24764.98, stdev=159959.31
00:20:46.374 clat (msec): min=87, max=5514, avg=2180.56, stdev=1573.94
00:20:46.374 lat (msec): min=98, max=5546, avg=2205.32, stdev=1582.04
00:20:46.374 clat percentiles (msec):
00:20:46.374 | 1.00th=[ 108], 5.00th=[ 296], 10.00th=[ 518], 20.00th=[ 919],
00:20:46.374 | 30.00th=[ 1036], 40.00th=[ 1099], 50.00th=[ 1167], 60.00th=[ 3306],
00:20:46.374 | 70.00th=[ 3373], 80.00th=[ 3574], 90.00th=[ 3876], 95.00th=[ 5403],
00:20:46.374 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537],
00:20:46.374 | 99.99th=[ 5537]
00:20:46.374 bw ( KiB/s): min=45056, max=133120, per=3.52%, avg=94549.33, stdev=32184.67, samples=6
00:20:46.374 iops : min= 44, max= 130, avg=92.33, stdev=31.43, samples=6
00:20:46.374 lat (msec) : 100=0.74%, 250=3.71%, 500=5.45%, 750=6.93%, 1000=7.43%
00:20:46.374 lat (msec) : 2000=33.66%, >=2000=42.08%
00:20:46.374 cpu : usr=0.02%, sys=1.14%, ctx=985, majf=0, minf=32769
00:20:46.374 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=7.9%, >=64=84.4%
00:20:46.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.374 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:20:46.374 issued rwts: total=404,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.374 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.374 job0: (groupid=0, jobs=1): err= 0: pid=578952: Wed May 15 00:07:13 2024
00:20:46.374 read: IOPS=52, BW=53.0MiB/s (55.5MB/s)(535MiB/10103msec)
00:20:46.374 slat (usec): min=82, max=2145.3k, avg=18728.88, stdev=145130.36
00:20:46.374 clat (msec): min=78, max=8743, avg=1563.65, stdev=1386.91
00:20:46.374 lat (msec): min=126, max=8775, avg=1582.38, stdev=1413.81
00:20:46.374 clat percentiles (msec):
00:20:46.374 | 1.00th=[ 146], 5.00th=[ 317], 10.00th=[ 567], 20.00th=[ 793],
00:20:46.374 | 30.00th=[ 818], 40.00th=[ 869], 50.00th=[ 919], 60.00th=[ 961],
00:20:46.374 | 70.00th=[ 1083], 80.00th=[ 3071], 90.00th=[ 3071], 95.00th=[ 4866],
00:20:46.374 | 99.00th=[ 7282], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792],
00:20:46.374 | 99.99th=[ 8792]
00:20:46.374 bw ( KiB/s): min=36864, max=174080, per=4.43%, avg=119076.57, stdev=43467.62, samples=7
00:20:46.374 iops : min= 36, max= 170, avg=116.29, stdev=42.45, samples=7
00:20:46.374 lat (msec) : 100=0.19%, 250=3.36%, 500=5.42%, 750=5.42%, 1000=51.21%
00:20:46.374 lat (msec) : 2000=6.54%, >=2000=27.85%
00:20:46.374 cpu : usr=0.00%, sys=1.11%, ctx=941, majf=0, minf=32769
00:20:46.374 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2%
00:20:46.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.374 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:20:46.374 issued rwts: total=535,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.374 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.374 job0: (groupid=0, jobs=1): err= 0: pid=578953: Wed May 15 00:07:13 2024
00:20:46.374 read: IOPS=31, BW=31.1MiB/s (32.6MB/s)(384MiB/12354msec)
00:20:46.374 slat (usec): min=59, max=2131.4k, avg=26751.78, stdev=214287.89
00:20:46.374 clat (msec): min=400, max=11018, avg=3950.98, stdev=4647.36
00:20:46.374 lat (msec): min=405, max=11083, avg=3977.74, stdev=4658.24
00:20:46.374 clat percentiles (msec):
00:20:46.374 | 1.00th=[ 435], 5.00th=[ 460], 10.00th=[ 464], 20.00th=[ 477],
00:20:46.374 | 30.00th=[ 489], 40.00th=[ 550], 50.00th=[ 642], 60.00th=[ 810],
00:20:46.374 | 70.00th=[ 6812], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939],
00:20:46.374 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073],
00:20:46.374 | 99.99th=[11073]
00:20:46.374 bw ( KiB/s): min= 1984, max=280576, per=2.45%, avg=65784.00, stdev=98336.97, samples=8
00:20:46.374 iops : min= 1, max= 274, avg=64.12, stdev=96.12, samples=8
00:20:46.374 lat (msec) : 500=31.51%, 750=23.44%, 1000=7.81%, >=2000=37.24%
00:20:46.374 cpu : usr=0.00%, sys=1.04%, ctx=260, majf=0, minf=32769
00:20:46.374 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6%
00:20:46.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.374 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:20:46.374 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.374 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.374 job1: (groupid=0, jobs=1): err= 0: pid=578979: Wed May 15 00:07:13 2024
00:20:46.374 read: IOPS=65, BW=65.2MiB/s (68.3MB/s)(804MiB/12336msec)
00:20:46.374 slat (usec): min=61, max=2139.8k, avg=12741.01, stdev=127921.63
00:20:46.374 clat (msec): min=466, max=8990, avg=1899.62, stdev=2928.59
00:20:46.374 lat (msec): min=466, max=8991, avg=1912.36, stdev=2937.49
00:20:46.374 clat percentiles (msec):
00:20:46.374 | 1.00th=[ 468], 5.00th=[ 472], 10.00th=[ 472], 20.00th=[ 477],
00:20:46.374 | 30.00th=[ 489], 40.00th=[ 506], 50.00th=[ 676], 60.00th=[ 776],
00:20:46.374 | 70.00th=[ 802], 80.00th=[ 860], 90.00th=[ 8658], 95.00th=[ 8792],
00:20:46.374 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926],
00:20:46.374 | 99.99th=[ 8926]
00:20:46.374 bw ( KiB/s): min= 1878, max=284672, per=4.69%, avg=125998.73, stdev=110827.37, samples=11
00:20:46.374 iops : min= 1, max= 278, avg=122.82, stdev=108.41, samples=11
00:20:46.374 lat (msec) : 500=37.44%, 750=15.92%, 1000=29.85%, >=2000=16.79%
00:20:46.374 cpu : usr=0.06%, sys=1.11%, ctx=641, majf=0, minf=32393
00:20:46.374 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:20:46.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.374 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:46.374 issued rwts: total=804,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.374 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.374 job1: (groupid=0, jobs=1): err= 0: pid=578980: Wed May 15 00:07:13 2024
00:20:46.374 read: IOPS=1, BW=1745KiB/s (1787kB/s)(21.0MiB/12325msec)
00:20:46.374 slat (usec): min=453, max=2142.1k, avg=486774.31, stdev=873001.66
00:20:46.374 clat (msec): min=2101, max=12323, avg=10241.09, stdev=3206.25
00:20:46.374 lat (msec): min=4226, max=12324, avg=10727.86, stdev=2633.61
00:20:46.374 clat percentiles (msec):
00:20:46.374 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6409],
00:20:46.374 | 30.00th=[10671], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281],
00:20:46.374 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.374 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.374 | 99.99th=[12281]
00:20:46.374 lat (msec) : >=2000=100.00%
00:20:46.374 cpu : usr=0.00%, sys=0.11%, ctx=46, majf=0, minf=5377
00:20:46.374 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0%
00:20:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.375 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.375 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.375 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.375 job1: (groupid=0, jobs=1): err= 0: pid=578981: Wed May 15 00:07:13 2024
00:20:46.375 read: IOPS=2, BW=2826KiB/s (2894kB/s)(34.0MiB/12320msec)
00:20:46.375 slat (usec): min=513, max=2136.5k, avg=301325.04, stdev=718874.76
00:20:46.375 clat (msec): min=2074, max=12319, avg=10822.16, stdev=2807.95
00:20:46.375 lat (msec): min=4149, max=12319, avg=11123.49, stdev=2353.76
00:20:46.375 clat percentiles (msec):
00:20:46.375 | 1.00th=[ 2072], 5.00th=[ 4144], 10.00th=[ 6342], 20.00th=[ 8490],
00:20:46.375 | 30.00th=[12147], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281],
00:20:46.375 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.375 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.375 | 99.99th=[12281]
00:20:46.375 lat (msec) : >=2000=100.00%
00:20:46.375 cpu : usr=0.00%, sys=0.15%, ctx=60, majf=0, minf=8705
00:20:46.375 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0%
00:20:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.375 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.375 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.375 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.375 job1: (groupid=0, jobs=1): err= 0: pid=578982: Wed May 15 00:07:13 2024
00:20:46.375 read: IOPS=15, BW=15.2MiB/s (15.9MB/s)(186MiB/12258msec)
00:20:46.375 slat (usec): min=74, max=2136.5k, avg=54742.95, stdev=310798.60
00:20:46.375 clat (msec): min=460, max=12245, avg=8058.10, stdev=4948.37
00:20:46.375 lat (msec): min=462, max=12257, avg=8112.84, stdev=4933.88
00:20:46.375 clat percentiles (msec):
00:20:46.375 | 1.00th=[ 460], 5.00th=[ 489], 10.00th=[ 523], 20.00th=[ 617],
00:20:46.375 | 30.00th=[ 3339], 40.00th=[11342], 50.00th=[11342], 60.00th=[11476],
00:20:46.375 | 70.00th=[11610], 80.00th=[11610], 90.00th=[11745], 95.00th=[11745],
00:20:46.375 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.375 | 99.99th=[12281]
00:20:46.375 bw ( KiB/s): min= 1446, max=104448, per=0.75%, avg=20036.17, stdev=41389.72, samples=6
00:20:46.375 iops : min= 1, max= 102, avg=19.33, stdev=40.52, samples=6
00:20:46.375 lat (msec) : 500=5.91%, 750=21.51%, 2000=0.54%, >=2000=72.04%
00:20:46.375 cpu : usr=0.00%, sys=0.60%, ctx=125, majf=0, minf=32769
00:20:46.375 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.2%, >=64=66.1%
00:20:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.375 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7%
00:20:46.375 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.375 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.375 job1: (groupid=0, jobs=1): err= 0: pid=578983: Wed May 15 00:07:13 2024
00:20:46.375 read: IOPS=4, BW=4401KiB/s (4507kB/s)(53.0MiB/12332msec)
00:20:46.375 slat (usec): min=462, max=2119.0k, avg=193439.72, stdev=594277.27
00:20:46.375 clat (msec): min=2079, max=12331, avg=10550.32, stdev=2863.31
00:20:46.375 lat (msec): min=4145, max=12331, avg=10743.76, stdev=2615.62
00:20:46.375 clat percentiles (msec):
00:20:46.375 | 1.00th=[ 2072], 5.00th=[ 4144], 10.00th=[ 6275], 20.00th=[ 8490],
00:20:46.375 | 30.00th=[10537], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281],
00:20:46.375 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.375 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.375 | 99.99th=[12281]
00:20:46.375 lat (msec) : >=2000=100.00%
00:20:46.375 cpu : usr=0.00%, sys=0.26%, ctx=67, majf=0, minf=13569
00:20:46.375 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0%
00:20:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.375 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.375 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.375 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.375 job1: (groupid=0, jobs=1): err= 0: pid=578985: Wed May 15 00:07:13 2024
00:20:46.375 read: IOPS=15, BW=15.5MiB/s (16.2MB/s)(191MiB/12354msec)
00:20:46.375 slat (usec): min=68, max=2100.8k, avg=53713.13, stdev=303487.13
00:20:46.375 clat (msec): min=516, max=12265, avg=7227.33, stdev=4082.21
00:20:46.375 lat (msec): min=519, max=12275, avg=7281.05, stdev=4078.90
00:20:46.375 clat percentiles (msec):
00:20:46.375 | 1.00th=[ 518], 5.00th=[ 518], 10.00th=[ 542], 20.00th=[ 2123],
00:20:46.375 | 30.00th=[ 4178], 40.00th=[10000], 50.00th=[10134], 60.00th=[10134],
00:20:46.375 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10537],
00:20:46.375 | 99.00th=[10671], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.375 | 99.99th=[12281]
00:20:46.375 bw ( KiB/s): min= 1943, max=110592, per=0.81%, avg=21827.83, stdev=43544.22, samples=6
00:20:46.375 iops : min= 1, max= 108, avg=21.17, stdev=42.61, samples=6
00:20:46.375 lat (msec) : 750=14.14%, >=2000=85.86%
00:20:46.375 cpu : usr=0.00%, sys=0.58%, ctx=149, majf=0, minf=32769
00:20:46.375 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=67.0%
00:20:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.375 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5%
00:20:46.375 issued rwts: total=191,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.375 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.375 job1: (groupid=0, jobs=1): err= 0: pid=578986: Wed May 15 00:07:13 2024
00:20:46.375 read: IOPS=8, BW=8803KiB/s (9015kB/s)(106MiB/12330msec)
00:20:46.375 slat (usec): min=440, max=2099.3k, avg=96720.53, stdev=409042.82
00:20:46.375 clat (msec): min=2076, max=12328, avg=8554.83, stdev=2835.45
00:20:46.375 lat (msec): min=4175, max=12329, avg=8651.55, stdev=2786.81
00:20:46.375 clat percentiles (msec):
00:20:46.375 | 1.00th=[ 4178], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 6275],
00:20:46.375 | 30.00th=[ 6409], 40.00th=[ 8423], 50.00th=[ 8490], 60.00th=[10537],
00:20:46.375 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281],
00:20:46.375 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.375 | 99.99th=[12281]
00:20:46.375 lat (msec) : >=2000=100.00%
00:20:46.375 cpu : usr=0.00%, sys=0.55%, ctx=62, majf=0, minf=27137
00:20:46.375 IO depths : 1=0.9%, 2=1.9%, 4=3.8%, 8=7.5%, 16=15.1%, 32=30.2%, >=64=40.6%
00:20:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.375 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:20:46.375 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.375 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.375 job1: (groupid=0, jobs=1): err= 0: pid=578987: Wed May 15 00:07:13 2024
00:20:46.375 read: IOPS=6, BW=6836KiB/s (7000kB/s)(82.0MiB/12283msec)
00:20:46.375 slat (usec): min=415, max=4200.1k, avg=124420.40, stdev=572211.59
00:20:46.375 clat (msec): min=2079, max=12281, avg=10140.00, stdev=3067.14
00:20:46.375 lat (msec): min=4151, max=12282, avg=10264.42, stdev=2940.45
00:20:46.375 clat percentiles (msec):
00:20:46.375 | 1.00th=[ 2072], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 6342],
00:20:46.375 | 30.00th=[10671], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013],
00:20:46.375 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281],
00:20:46.375 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.375 | 99.99th=[12281]
00:20:46.375 lat (msec) : >=2000=100.00%
00:20:46.375 cpu : usr=0.00%, sys=0.42%, ctx=72, majf=0, minf=20993
00:20:46.375 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=9.8%, 16=19.5%, 32=39.0%, >=64=23.2%
00:20:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.375 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:20:46.375 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.375 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.375 job1: (groupid=0, jobs=1): err= 0: pid=578988: Wed May 15 00:07:13 2024
00:20:46.375 read: IOPS=0, BW=1000KiB/s (1024kB/s)(12.0MiB/12284msec)
00:20:46.375 slat (msec): min=7, max=4205, avg=850.11, stdev=1365.51
00:20:46.375 clat (msec): min=2081, max=10650, avg=6524.31, stdev=2808.69
00:20:46.375 lat (msec): min=4167, max=12283, avg=7374.41, stdev=2884.59
00:20:46.375 clat percentiles (msec):
00:20:46.375 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 4178], 20.00th=[ 4178],
00:20:46.375 | 30.00th=[ 4178], 40.00th=[ 6342], 50.00th=[ 6342], 60.00th=[ 6409],
00:20:46.375 | 70.00th=[ 6409], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671],
00:20:46.375 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671],
00:20:46.375 | 99.99th=[10671]
00:20:46.375 lat (msec) : >=2000=100.00%
00:20:46.375 cpu : usr=0.00%, sys=0.09%, ctx=35, majf=0, minf=3073
00:20:46.375 IO depths : 1=8.3%, 2=16.7%, 4=33.3%, 8=41.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.375 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.375 issued rwts: total=12,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.375 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.375 job1: (groupid=0, jobs=1): err= 0: pid=578989: Wed May 15 00:07:13 2024
00:20:46.375 read: IOPS=2, BW=2905KiB/s (2975kB/s)(35.0MiB/12337msec)
00:20:46.375 slat (usec): min=473, max=4210.8k, avg=292977.25, stdev=872103.49
00:20:46.375 clat (msec): min=2081, max=12334, avg=10252.07, stdev=3299.51
00:20:46.375 lat (msec): min=4167, max=12335, avg=10545.04, stdev=2993.78
00:20:46.376 clat percentiles (msec):
00:20:46.376 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 6342],
00:20:46.376 | 30.00th=[10671], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281],
00:20:46.376 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.376 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.376 | 99.99th=[12281]
00:20:46.376 lat (msec) : >=2000=100.00%
00:20:46.376 cpu : usr=0.00%, sys=0.20%, ctx=55, majf=0, minf=8961
00:20:46.376 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0%
00:20:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.376 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.376 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.376 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.376 job1: (groupid=0, jobs=1): err= 0: pid=578990: Wed May 15 00:07:13 2024
00:20:46.376 read: IOPS=2, BW=2333KiB/s (2389kB/s)(28.0MiB/12291msec)
00:20:46.376 slat (usec): min=422, max=2137.6k, avg=363865.06, stdev=781928.85
00:20:46.376 clat (msec): min=2101, max=12288, avg=10197.08, stdev=2916.71
00:20:46.376 lat (msec): min=4203, max=12290, avg=10560.95, stdev=2470.83
00:20:46.376 clat percentiles (msec):
00:20:46.376 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490],
00:20:46.376 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12281], 60.00th=[12281],
00:20:46.376 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.376 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.376 | 99.99th=[12281]
00:20:46.376 lat (msec) : >=2000=100.00%
00:20:46.376 cpu : usr=0.00%, sys=0.13%, ctx=33, majf=0, minf=7169
00:20:46.376 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0%
00:20:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.376 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.376 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.376 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.376 job1: (groupid=0, jobs=1): err= 0: pid=578991: Wed May 15 00:07:13 2024
00:20:46.376 read: IOPS=1, BW=2006KiB/s (2054kB/s)(24.0MiB/12253msec)
00:20:46.376 slat (usec): min=448, max=3724.6k, avg=423968.43, stdev=996682.95
00:20:46.376 clat (msec): min=2076, max=12251, avg=9836.72, stdev=3332.55
00:20:46.376 lat (msec): min=4145, max=12252, avg=10260.69, stdev=2924.71
00:20:46.376 clat percentiles (msec):
00:20:46.376 | 1.00th=[ 2072], 5.00th=[ 4144], 10.00th=[ 6275], 20.00th=[ 6275],
00:20:46.376 | 30.00th=[ 6342], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281],
00:20:46.376 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.376 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.376 | 99.99th=[12281]
00:20:46.376 lat (msec) : >=2000=100.00%
00:20:46.376 cpu : usr=0.00%, sys=0.10%, ctx=30, majf=0, minf=6145
00:20:46.376 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0%
00:20:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.376 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.376 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.376 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.376 job1: (groupid=0, jobs=1): err= 0: pid=578993: Wed May 15 00:07:13 2024
00:20:46.376 read: IOPS=1, BW=1916KiB/s (1962kB/s)(23.0MiB/12293msec)
00:20:46.376 slat (usec): min=536, max=2118.3k, avg=443402.32, stdev=844265.18
00:20:46.376 clat (msec): min=2093, max=12291, avg=9222.84, stdev=3435.11
00:20:46.376 lat (msec): min=4194, max=12292, avg=9666.24, stdev=3116.47
00:20:46.376 clat percentiles (msec):
00:20:46.376 | 1.00th=[ 2089], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6342],
00:20:46.376 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12281],
00:20:46.376 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.376 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.376 | 99.99th=[12281]
00:20:46.376 lat (msec) : >=2000=100.00%
00:20:46.376 cpu : usr=0.00%, sys=0.12%, ctx=48, majf=0, minf=5889
00:20:46.376 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0%
00:20:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.376 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.376 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.376 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.376 job2: (groupid=0, jobs=1): err= 0: pid=579008: Wed May 15 00:07:13 2024
00:20:46.376 read: IOPS=1, BW=1755KiB/s (1797kB/s)(21.0MiB/12256msec)
00:20:46.376 slat (usec): min=486, max=2122.2k, avg=484544.49, stdev=864715.43
00:20:46.376 clat (msec): min=2079, max=12254, avg=8929.28, stdev=3010.26
00:20:46.376 lat (msec): min=4152, max=12255, avg=9413.83, stdev=2650.02
00:20:46.376 clat percentiles (msec):
00:20:46.376 | 1.00th=[ 2072], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 6342],
00:20:46.376 | 30.00th=[ 8423], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[10537],
00:20:46.376 | 70.00th=[10671], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.376 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.376 | 99.99th=[12281]
00:20:46.376 lat (msec) : >=2000=100.00%
00:20:46.376 cpu : usr=0.00%, sys=0.11%, ctx=36, majf=0, minf=5377
00:20:46.376 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0%
00:20:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.376 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.376 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.376 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.376 job2: (groupid=0, jobs=1): err= 0: pid=579010: Wed May 15 00:07:13 2024
00:20:46.376 read: IOPS=28, BW=28.2MiB/s (29.6MB/s)(291MiB/10301msec)
00:20:46.376 slat (usec): min=82, max=2131.1k, avg=35048.22, stdev=238625.16
00:20:46.376 clat (msec): min=99, max=8979, avg=4302.39, stdev=3783.80
00:20:46.376 lat (msec): min=547, max=9028, avg=4337.44, stdev=3781.92
00:20:46.376 clat percentiles (msec):
00:20:46.376 | 1.00th=[ 550], 5.00th=[ 600], 10.00th=[ 667], 20.00th=[ 860],
00:20:46.376 | 30.00th=[ 927], 40.00th=[ 1028], 50.00th=[ 1083], 60.00th=[ 6544],
00:20:46.376 | 70.00th=[ 8792], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 8926],
00:20:46.376 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926],
00:20:46.376 | 99.99th=[ 8926]
00:20:46.376 bw ( KiB/s): min= 2048, max=204800, per=1.78%, avg=47687.29, stdev=75977.24, samples=7
00:20:46.376 iops : min= 2, max= 200, avg=46.43, stdev=74.29, samples=7
00:20:46.376 lat (msec) : 100=0.34%, 750=13.06%, 1000=20.27%, 2000=17.87%, >=2000=48.45%
00:20:46.376 cpu : usr=0.04%, sys=1.14%, ctx=383, majf=0, minf=32769
00:20:46.376 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4%
00:20:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.376 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:20:46.376 issued rwts: total=291,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.376 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.376 job2: (groupid=0, jobs=1): err= 0: pid=579011: Wed May 15 00:07:13 2024
00:20:46.376 read: IOPS=2, BW=2993KiB/s (3065kB/s)(30.0MiB/10264msec)
00:20:46.376 slat (usec): min=428, max=2123.0k, avg=338728.05, stdev=759280.59
00:20:46.376 clat (msec): min=101, max=10262, avg=7572.13, stdev=3279.70
00:20:46.376 lat (msec): min=2135, max=10263, avg=7910.86, stdev=2993.85
00:20:46.376 clat percentiles (msec):
00:20:46.376 | 1.00th=[ 103], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329],
00:20:46.376 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[10268],
00:20:46.376 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:20:46.376 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.376 | 99.99th=[10268]
00:20:46.376 lat (msec) : 250=3.33%, >=2000=96.67%
00:20:46.376 cpu : usr=0.00%, sys=0.15%, ctx=52, majf=0, minf=7681
00:20:46.376 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0%
00:20:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.376 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.376 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.376 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.376 job2: (groupid=0, jobs=1): err= 0: pid=579012: Wed May 15 00:07:13 2024
00:20:46.376 read: IOPS=1, BW=1418KiB/s (1452kB/s)(17.0MiB/12278msec)
00:20:46.376 slat (msec): min=4, max=5610, avg=599.44, stdev=1462.97
00:20:46.376 clat (msec): min=2086, max=12211, avg=10349.46, stdev=3339.39
00:20:46.376 lat (msec): min=4138, max=12276, avg=10948.91, stdev=2595.09
00:20:46.376 clat percentiles (msec):
00:20:46.376 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 4144], 20.00th=[ 6342],
00:20:46.376 | 30.00th=[12013], 40.00th=[12013], 50.00th=[12013], 60.00th=[12013],
00:20:46.376 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147],
00:20:46.376 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:20:46.376 | 99.99th=[12147]
00:20:46.376 lat (msec) : >=2000=100.00%
00:20:46.376 cpu : usr=0.00%, sys=0.09%, ctx=56, majf=0, minf=4353
00:20:46.376 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0%
00:20:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.376 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.376 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.376 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.376 job2: (groupid=0, jobs=1): err= 0: pid=579013: Wed May 15 00:07:13 2024
00:20:46.376 read: IOPS=2, BW=2082KiB/s (2132kB/s)(25.0MiB/12296msec)
00:20:46.376 slat (usec): min=424, max=3795.4k, avg=408203.72, stdev=990899.82
00:20:46.376 clat (msec): min=2090, max=12292, avg=10892.73, stdev=2940.06
00:20:46.376 lat (msec): min=4155, max=12295, avg=11300.93, stdev=2307.33
00:20:46.376 clat percentiles (msec):
00:20:46.376 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 6275], 20.00th=[ 8423],
00:20:46.376 | 30.00th=[12281], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281],
00:20:46.376 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.376 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.376 | 99.99th=[12281]
00:20:46.376 lat (msec) : >=2000=100.00%
00:20:46.376 cpu : usr=0.00%, sys=0.12%, ctx=34, majf=0, minf=6401
00:20:46.376 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0%
00:20:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.376 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.376 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.377 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.377 job2: (groupid=0, jobs=1): err= 0: pid=579014: Wed May 15 00:07:13 2024
00:20:46.377 read: IOPS=22, BW=22.9MiB/s (24.0MB/s)(235MiB/10250msec)
00:20:46.377 slat (usec): min=68, max=2227.5k, avg=43208.83, stdev=269101.01
00:20:46.377 clat (msec): min=93, max=9142, avg=5129.98, stdev=3728.58
00:20:46.377 lat (msec): min=531, max=9154, avg=5173.19, stdev=3717.74
00:20:46.377 clat percentiles (msec):
00:20:46.377 | 1.00th=[ 523], 5.00th=[ 550], 10.00th=[ 567], 20.00th=[ 667],
00:20:46.377 | 30.00th=[ 1133], 40.00th=[ 2735], 50.00th=[ 4933], 60.00th=[ 8792],
00:20:46.377 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9060], 95.00th=[ 9060],
00:20:46.377 | 99.00th=[ 9060], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194],
00:20:46.377 | 99.99th=[ 9194]
00:20:46.377 bw ( KiB/s): min= 2048, max=151552, per=1.17%, avg=31299.71, stdev=53299.25, samples=7
00:20:46.377 iops : min= 2, max= 148, avg=30.29, stdev=52.19, samples=7
00:20:46.377 lat (msec) : 100=0.43%, 750=22.13%, 1000=4.68%, 2000=8.09%, >=2000=64.68%
00:20:46.377 cpu : usr=0.03%, sys=1.17%, ctx=334, majf=0, minf=32769
00:20:46.377 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.6%, >=64=73.2%
00:20:46.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.377 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9%
00:20:46.377 issued rwts: total=235,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.377 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.377 job2: (groupid=0, jobs=1): err= 0: pid=579015: Wed May 15 00:07:13 2024
00:20:46.377 read: IOPS=1, BW=1420KiB/s (1454kB/s)(17.0MiB/12261msec)
00:20:46.377 slat (usec): min=660, max=4280.4k, avg=598851.85, stdev=1207850.74
00:20:46.377 clat (msec): min=2079, max=12259, avg=10277.37, stdev=3344.02
00:20:46.377 lat (msec): min=4152, max=12260, avg=10876.22, stdev=2616.72
00:20:46.377 clat percentiles (msec):
00:20:46.377 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 4144], 20.00th=[ 6342],
00:20:46.377 | 30.00th=[10671], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281],
00:20:46.377 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.377 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.377 | 99.99th=[12281]
00:20:46.377 lat (msec) : >=2000=100.00%
00:20:46.377 cpu : usr=0.00%, sys=0.10%, ctx=26, majf=0, minf=4353
00:20:46.377 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0%
00:20:46.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.377 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.377 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.377 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.377 job2: (groupid=0, jobs=1): err= 0: pid=579017: Wed May 15 00:07:13 2024
00:20:46.377 read: IOPS=6, BW=6443KiB/s (6598kB/s)(65.0MiB/10330msec)
00:20:46.377 slat (usec): min=389, max=2126.3k, avg=157633.70, stdev=533400.74
00:20:46.377 clat (msec): min=82, max=10328, avg=8203.62, stdev=2952.06
00:20:46.377 lat (msec): min=2135, max=10329, avg=8361.25, stdev=2780.21
00:20:46.377 clat percentiles (msec):
00:20:46.377 | 1.00th=[ 83], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 4329],
00:20:46.377 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[10134], 60.00th=[10268],
00:20:46.377 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:20:46.377 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.377 | 99.99th=[10268]
00:20:46.377 lat (msec) : 100=1.54%, >=2000=98.46%
00:20:46.377 cpu : usr=0.00%, sys=0.41%, ctx=71, majf=0, minf=16641
00:20:46.377 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1%
00:20:46.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.377 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:20:46.377 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.377 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.377 job2: (groupid=0, jobs=1): err= 0: pid=579018: Wed May 15 00:07:13 2024
00:20:46.377 read: IOPS=3, BW=3785KiB/s (3876kB/s)(38.0MiB/10281msec)
00:20:46.377 slat (usec): min=486, max=2122.5k, avg=267911.69, stdev=677128.74
00:20:46.377 clat (msec): min=99, max=10280, avg=7961.27, stdev=3020.79
00:20:46.377 lat (msec): min=2141, max=10280, avg=8229.18, stdev=2743.45
00:20:46.377 clat percentiles (msec):
00:20:46.377 | 1.00th=[ 101], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329],
00:20:46.377 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10134],
00:20:46.377 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:20:46.377 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.377 | 99.99th=[10268]
00:20:46.377 lat (msec) : 100=2.63%, >=2000=97.37%
00:20:46.377 cpu : usr=0.01%, sys=0.21%, ctx=57, majf=0, minf=9729
00:20:46.377 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0%
00:20:46.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.377 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.377 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.377 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.377 job2: (groupid=0, jobs=1): err= 0: pid=579019: Wed May 15 00:07:13 2024
00:20:46.377 read: IOPS=12, BW=12.6MiB/s (13.2MB/s)(129MiB/10249msec)
00:20:46.377 slat (usec): min=433, max=2217.9k, avg=78668.69, stdev=367210.40
00:20:46.377 clat (msec): min=99, max=10186, avg=8899.35, stdev=1524.95
00:20:46.377 lat (msec): min=2143, max=10248, avg=8978.02, stdev=1314.55
00:20:46.377 clat percentiles (msec):
00:20:46.377 | 1.00th=[ 2140], 5.00th=[ 4329], 10.00th=[ 8658], 20.00th=[ 8792],
00:20:46.377 | 30.00th=[ 8792], 40.00th=[ 8926], 50.00th=[ 9194], 60.00th=[ 9329],
00:20:46.377 | 70.00th=[ 9463], 80.00th=[ 9597], 90.00th=[10000], 95.00th=[10000],
00:20:46.377 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:20:46.377 | 99.99th=[10134]
00:20:46.377 bw ( KiB/s): min= 2048, max= 2048, per=0.08%, avg=2048.00, stdev= 0.00, samples=1
00:20:46.377 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1
00:20:46.377 lat (msec) : 100=0.78%, >=2000=99.22%
00:20:46.377 cpu : usr=0.03%, sys=0.95%, ctx=228, majf=0, minf=32769
00:20:46.377 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.2%, 16=12.4%, 32=24.8%, >=64=51.2%
00:20:46.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.377 complete : 0=0.0%, 4=66.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=33.3%
00:20:46.377 issued rwts: total=129,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.377 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.377 job2: (groupid=0, jobs=1): err= 0: pid=579020: Wed May 15 00:07:13 2024
00:20:46.377 read: IOPS=89, BW=89.2MiB/s (93.6MB/s)(1099MiB/12314msec)
00:20:46.377 slat (usec): min=60, max=2072.6k, avg=9304.21, stdev=98692.43
00:20:46.377 clat (msec): min=175, max=6279, avg=1365.13, stdev=1814.40
00:20:46.377 lat (msec): min=177, max=6279, avg=1374.44, stdev=1819.48
00:20:46.377 clat percentiles (msec):
00:20:46.377 | 1.00th=[ 188], 5.00th=[ 239], 10.00th=[ 376], 20.00th=[ 422],
00:20:46.377 | 30.00th=[ 430], 40.00th=[ 447], 50.00th=[ 502], 60.00th=[ 592],
00:20:46.377 | 70.00th=[ 642], 80.00th=[ 2567], 90.00th=[ 5805], 95.00th=[ 6007],
00:20:46.377 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6275], 99.95th=[ 6275],
00:20:46.377 | 99.99th=[ 6275]
00:20:46.377 bw ( KiB/s): min= 1935, max=378880, per=6.18%, avg=165878.58, stdev=129836.84, samples=12
00:20:46.377 iops : min= 1, max= 370, avg=161.92, stdev=126.90, samples=12
00:20:46.377 lat (msec) : 250=5.73%, 500=43.95%, 750=25.30%, 1000=1.73%, >=2000=23.29%
00:20:46.377 cpu : usr=0.08%, sys=1.76%, ctx=909, majf=0, minf=32769
00:20:46.377 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3%
00:20:46.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.377 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:46.377 issued rwts: total=1099,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.377 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.377 job2: (groupid=0, jobs=1): err= 0: pid=579021: Wed May 15 00:07:13 2024
00:20:46.377 read: IOPS=6, BW=6229KiB/s (6378kB/s)(75.0MiB/12330msec)
00:20:46.377 slat (usec): min=473, max=2173.4k, avg=136575.22, stdev=492445.12
00:20:46.377 clat (msec): min=2086, max=12329, avg=11257.96, stdev=1854.94
00:20:46.377 lat (msec): min=4146, max=12329, avg=11394.54, stdev=1516.80
00:20:46.377 clat percentiles (msec):
00:20:46.377 | 1.00th=[ 2089], 5.00th=[ 8423], 10.00th=[ 8557], 20.00th=[10671],
00:20:46.377 | 30.00th=[10671], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281],
00:20:46.377 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281],
00:20:46.377 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:20:46.377 | 99.99th=[12281]
00:20:46.377 lat (msec) : >=2000=100.00%
00:20:46.377 cpu : usr=0.00%, sys=0.35%, ctx=99, majf=0, minf=19201
00:20:46.378 IO depths : 1=1.3%, 2=2.7%, 4=5.3%, 8=10.7%, 16=21.3%, 32=42.7%, >=64=16.0%
00:20:46.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.378 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:20:46.378 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.378 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.378 job2: (groupid=0, jobs=1): err= 0: pid=579022: Wed May 15 00:07:13 2024
00:20:46.378 read: IOPS=17, BW=17.4MiB/s (18.3MB/s)(179MiB/10261msec)
00:20:46.378 slat (usec): min=117, max=2227.5k, avg=56787.06, stdev=303811.96
00:20:46.378 clat (msec): min=94, max=9432, avg=6628.19, stdev=2702.07
00:20:46.378 lat (msec): min=1399, max=9434, avg=6684.98, stdev=2662.38
00:20:46.378 clat percentiles (msec):
00:20:46.378 | 1.00th=[ 1385], 5.00th=[ 2056], 10.00th=[ 2165], 20.00th=[ 4212],
00:20:46.378 | 30.00th=[ 5403], 40.00th=[ 5671], 50.00th=[ 7416], 60.00th=[ 8792],
00:20:46.378 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9194], 95.00th=[ 9329],
00:20:46.378 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463],
00:20:46.378 | 99.99th=[ 9463]
00:20:46.378 bw ( KiB/s): min= 6144, max=45056, per=0.78%, avg=20885.40, stdev=16476.75, samples=5
00:20:46.378 iops : min= 6, max= 44, avg=20.20, stdev=16.25, samples=5
00:20:46.378 lat (msec) : 100=0.56%, 2000=3.35%, >=2000=96.09%
00:20:46.378 cpu : usr=0.00%, sys=1.05%, ctx=330, majf=0, minf=32769
00:20:46.378 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.5%, 16=8.9%, 32=17.9%, >=64=64.8%
00:20:46.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.378 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.9%
00:20:46.378 issued rwts: total=179,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.378 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.378 job3: (groupid=0, jobs=1): err= 0: pid=579032: Wed May 15 00:07:13 2024
00:20:46.378 read: IOPS=4, BW=4789KiB/s (4904kB/s)(48.0MiB/10264msec)
00:20:46.378 slat (usec): min=445, max=2102.4k, avg=211355.76, stdev=611396.55
00:20:46.378 clat (msec): min=118, max=10262, avg=6946.43, stdev=3038.76
00:20:46.378 lat (msec): min=2139, max=10263, avg=7157.78, stdev=2903.53
00:20:46.378 clat percentiles (msec):
00:20:46.378 | 1.00th=[ 118], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4329],
00:20:46.378 | 30.00th=[ 6477], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658],
00:20:46.378 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:20:46.378 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.378 | 99.99th=[10268]
00:20:46.378 lat (msec) : 250=2.08%, >=2000=97.92%
00:20:46.378 cpu : usr=0.00%, sys=0.29%, ctx=56, majf=0, minf=12289
00:20:46.378 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0%
00:20:46.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.378 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.378 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.378 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.378 job3: (groupid=0, jobs=1): err= 0: pid=579033: Wed May 15 00:07:13 2024
00:20:46.378 read: IOPS=3, BW=3395KiB/s (3477kB/s)(34.0MiB/10255msec)
00:20:46.378 slat (msec): min=5, max=2125, avg=298.37, stdev=683.84
00:20:46.378 clat (msec): min=109, max=10141, avg=6484.90, stdev=2542.13
00:20:46.378 lat (msec): min=2134, max=10254, avg=6783.27, stdev=2359.98
00:20:46.378 clat percentiles (msec):
00:20:46.378 | 1.00th=[ 110], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329],
00:20:46.378 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8490],
00:20:46.378 | 70.00th=[ 8490], 80.00th=[ 8658], 90.00th=[ 8658], 95.00th=[10134],
00:20:46.378 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:20:46.378 | 99.99th=[10134]
00:20:46.378 lat (msec) : 250=2.94%, >=2000=97.06%
00:20:46.378 cpu : usr=0.02%, sys=0.21%, ctx=92, majf=0, minf=8705
00:20:46.378 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0%
00:20:46.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.378 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.378 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.378 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.378 job3: (groupid=0, jobs=1): err= 0: pid=579034: Wed May 15 00:07:13 2024
00:20:46.378 read: IOPS=13, BW=13.1MiB/s (13.8MB/s)(135MiB/10287msec)
00:20:46.378 slat (usec): min=456, max=2122.9k, avg=75390.75, stdev=351533.70
00:20:46.378 clat (msec): min=108, max=10270, avg=6231.85, stdev=1396.00
00:20:46.378 lat (msec): min=2141, max=10271, avg=6307.24, stdev=1336.00
00:20:46.378 clat percentiles (msec):
00:20:46.378 | 1.00th=[ 2140], 5.00th=[ 4463], 10.00th=[ 5805], 20.00th=[ 5873],
00:20:46.378 | 30.00th=[ 5940], 40.00th=[ 6007], 50.00th=[ 6074], 60.00th=[ 6141],
00:20:46.378 | 70.00th=[ 6141], 80.00th=[ 6275], 90.00th=[ 8087], 95.00th=[10134],
00:20:46.378 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.378 | 99.99th=[10268]
00:20:46.378 bw ( KiB/s): min= 2048, max= 6144, per=0.18%, avg=4778.67, stdev=2364.83, samples=3
00:20:46.378 iops : min= 2, max= 6, avg= 4.67, stdev= 2.31, samples=3
00:20:46.378 lat (msec) : 250=0.74%, >=2000=99.26%
00:20:46.378 cpu : usr=0.00%, sys=0.87%, ctx=138, majf=0, minf=32769
00:20:46.378 IO depths : 1=0.7%, 2=1.5%, 4=3.0%, 8=5.9%, 16=11.9%, 32=23.7%, >=64=53.3%
00:20:46.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.378 complete : 0=0.0%, 4=88.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=11.1%
00:20:46.378 issued rwts: total=135,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.378 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.378 job3: (groupid=0, jobs=1): err= 0: pid=579035: Wed May 15 00:07:13 2024
00:20:46.378 read: IOPS=1, BW=1297KiB/s (1329kB/s)(13.0MiB/10260msec)
00:20:46.378 slat (msec): min=9, max=2128, avg=780.21, stdev=997.71
00:20:46.378 clat (msec): min=116, max=10128, avg=4938.40, stdev=2705.98
00:20:46.378 lat (msec): min=2147, max=10259, avg=5718.61, stdev=2661.70
00:20:46.378 clat percentiles (msec):
00:20:46.378 | 1.00th=[ 117], 5.00th=[ 117], 10.00th=[ 2165], 20.00th=[ 2165],
00:20:46.378 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 4329], 60.00th=[ 4396],
00:20:46.378 | 70.00th=[ 6477], 80.00th=[ 6477], 90.00th=[ 8658], 95.00th=[10134],
00:20:46.378 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:20:46.378 | 99.99th=[10134]
00:20:46.378 lat (msec) : 250=7.69%, >=2000=92.31%
00:20:46.378 cpu : usr=0.00%, sys=0.10%, ctx=37, majf=0, minf=3329
00:20:46.378 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:46.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.378 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.378 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.378 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.378 job3: (groupid=0, jobs=1): err= 0: pid=579036: Wed May 15 00:07:13 2024
00:20:46.378 read: IOPS=7, BW=7439KiB/s (7618kB/s)(74.0MiB/10186msec)
00:20:46.378 slat (usec): min=499, max=2009.5k, avg=136732.62, stdev=468498.27
00:20:46.378 clat (msec): min=66, max=10183, avg=5884.33, stdev=2256.71
00:20:46.378 lat (msec): min=2076, max=10185, avg=6021.07, stdev=2205.36
00:20:46.378 clat percentiles (msec):
00:20:46.378 | 1.00th=[ 67], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4212],
00:20:46.378 | 30.00th=[ 6141], 40.00th=[ 6141], 50.00th=[ 6275], 60.00th=[ 6275],
00:20:46.378 | 70.00th=[ 6342], 80.00th=[ 6477], 90.00th=[ 8658], 95.00th=[10000],
00:20:46.378 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:20:46.378 | 99.99th=[10134]
00:20:46.378 lat (msec) : 100=1.35%, >=2000=98.65%
00:20:46.378 cpu : usr=0.00%, sys=0.54%, ctx=104, majf=0, minf=18945
00:20:46.378 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9%
00:20:46.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.378 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:20:46.378 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.378 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.378 job3: (groupid=0, jobs=1): err= 0: pid=579037: Wed May 15 00:07:13 2024
00:20:46.378 read: IOPS=2, BW=2987KiB/s (3059kB/s)(30.0MiB/10283msec)
00:20:46.378 slat (usec): min=458, max=2129.0k, avg=338890.50, stdev=750241.36
00:20:46.378 clat (msec): min=115, max=10282, avg=7833.15, stdev=3284.26
00:20:46.378 lat (msec): min=2136, max=10282, avg=8172.04, stdev=2969.95
00:20:46.378 clat percentiles (msec):
00:20:46.378 | 1.00th=[ 116], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329],
00:20:46.378 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10268], 60.00th=[10268],
00:20:46.378 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:20:46.378 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.378 | 99.99th=[10268]
00:20:46.378 lat (msec) : 250=3.33%, >=2000=96.67%
00:20:46.378 cpu : usr=0.00%, sys=0.18%, ctx=48, majf=0, minf=7681
00:20:46.378 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0%
00:20:46.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.378 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:46.378 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.378 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.378 job3: (groupid=0, jobs=1): err= 0: pid=579038: Wed May 15 00:07:13 2024
00:20:46.379 read: IOPS=3, BW=3391KiB/s (3473kB/s)(34.0MiB/10266msec)
00:20:46.379 slat (usec): min=426, max=2118.2k, avg=298788.37, stdev=709674.11
00:20:46.379 clat (msec): min=106, max=10264, avg=8189.96, stdev=2814.94
00:20:46.379 lat (msec): min=2147, max=10265, avg=8488.75, stdev=2445.93
00:20:46.379 clat percentiles (msec):
00:20:46.379 | 1.00th=[ 107], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 6477],
00:20:46.379 | 30.00th=[ 8658], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10268],
00:20:46.379 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:20:46.379 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.379 | 99.99th=[10268]
00:20:46.379 lat (msec) : 250=2.94%, >=2000=97.06%
00:20:46.379 cpu : usr=0.00%, sys=0.21%, ctx=50, majf=0, minf=8705
00:20:46.379 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0%
00:20:46.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.379 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.379 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.379 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.379 job3: (groupid=0, jobs=1): err= 0: pid=579039: Wed May 15 00:07:13 2024
00:20:46.379 read: IOPS=63, BW=63.6MiB/s (66.7MB/s)(655MiB/10291msec)
00:20:46.379 slat (usec): min=59, max=2112.4k, avg=15487.77, stdev=132863.68
00:20:46.379 clat (msec): min=141, max=8625, avg=1931.25, stdev=1560.03
00:20:46.379 lat (msec): min=346, max=9375, avg=1946.74, stdev=1576.83
00:20:46.379 clat percentiles (msec):
00:20:46.379 | 1.00th=[ 351], 5.00th=[ 376], 10.00th=[ 451], 20.00th=[ 768],
00:20:46.379 | 30.00th=[ 810], 40.00th=[ 835], 50.00th=[ 860], 60.00th=[ 2232],
00:20:46.379 | 70.00th=[ 3373], 80.00th=[ 3608], 90.00th=[ 4077], 95.00th=[ 4396],
00:20:46.379 | 99.00th=[ 4597], 99.50th=[ 6477], 99.90th=[ 8658], 99.95th=[ 8658],
00:20:46.379 | 99.99th=[ 8658]
00:20:46.379 bw ( KiB/s): min= 2048, max=223232, per=4.02%, avg=107929.60, stdev=74489.23, samples=10
00:20:46.379 iops : min= 2, max= 218, avg=105.40, stdev=72.74, samples=10
00:20:46.379 lat (msec) : 250=0.15%, 500=12.37%, 750=5.95%, 1000=39.69%, 2000=0.46%
00:20:46.379 lat (msec) : >=2000=41.37%
00:20:46.379 cpu : usr=0.02%, sys=1.34%, ctx=521, majf=0, minf=32769
00:20:46.379 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4%
00:20:46.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.379 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:20:46.379 issued rwts: total=655,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.379 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.379 job3: (groupid=0, jobs=1): err= 0: pid=579041: Wed May 15 00:07:13 2024
00:20:46.379 read: IOPS=29, BW=29.5MiB/s (30.9MB/s)(303MiB/10268msec)
00:20:46.379 slat (usec): min=55, max=2097.5k, avg=33522.68, stdev=220980.43
00:20:46.379 clat (msec): min=108, max=8253, avg=2311.02, stdev=2090.40
00:20:46.379 lat (msec): min=546, max=8255, avg=2344.54, stdev=2112.28
00:20:46.379 clat percentiles (msec):
00:20:46.379 | 1.00th=[ 558], 5.00th=[ 617], 10.00th=[ 625], 20.00th=[ 709],
00:20:46.379 | 30.00th=[ 1284], 40.00th=[ 1485], 50.00th=[ 1670], 60.00th=[ 1838],
00:20:46.379 | 70.00th=[ 2433], 80.00th=[ 2500], 90.00th=[ 6678], 95.00th=[ 8154],
00:20:46.379 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221],
00:20:46.379 | 99.99th=[ 8221]
00:20:46.379 bw ( KiB/s): min=36864, max=172032, per=4.45%, avg=119466.67, stdev=72417.39, samples=3
00:20:46.379 iops : min= 36, max= 168, avg=116.67, stdev=70.72, samples=3
00:20:46.379 lat (msec) : 250=0.33%, 750=22.44%, 2000=41.91%, >=2000=35.31%
00:20:46.379 cpu : usr=0.01%, sys=1.15%, ctx=207, majf=0, minf=32769
00:20:46.379 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.2%
00:20:46.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.379 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:20:46.379 issued rwts: total=303,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.379 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.379 job3: (groupid=0, jobs=1): err= 0: pid=579042: Wed May 15 00:07:13 2024
00:20:46.379 read: IOPS=111, BW=112MiB/s (117MB/s)(1148MiB/10289msec)
00:20:46.379 slat (usec): min=61, max=2141.4k, avg=8831.17, stdev=93136.30
00:20:46.379 clat (msec): min=143, max=6460, avg=1090.96, stdev=1145.75
00:20:46.379 lat (msec): min=299, max=6462, avg=1099.79, stdev=1153.92
00:20:46.379 clat percentiles (msec):
00:20:46.379 | 1.00th=[ 305], 5.00th=[ 342], 10.00th=[ 355], 20.00th=[ 401],
00:20:46.379 | 30.00th=[ 418], 40.00th=[ 451], 50.00th=[ 506], 60.00th=[ 667],
00:20:46.379 | 70.00th=[ 760], 80.00th=[ 2500], 90.00th=[ 3239], 95.00th=[ 3406],
00:20:46.379 | 99.00th=[ 4329], 99.50th=[ 6409], 99.90th=[ 6477], 99.95th=[ 6477],
00:20:46.379 | 99.99th=[ 6477]
00:20:46.379 bw ( KiB/s): min=16384, max=329728, per=6.48%, avg=174080.00, stdev=115228.65, samples=12
00:20:46.379 iops : min= 16, max= 322, avg=170.00, stdev=112.53, samples=12
00:20:46.379 lat (msec) : 250=0.09%, 500=47.56%, 750=21.78%, 1000=6.88%, 2000=1.31%
00:20:46.379 lat (msec) : >=2000=22.39%
00:20:46.379 cpu : usr=0.06%, sys=1.86%, ctx=934, majf=0, minf=32769
00:20:46.379 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5%
00:20:46.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.379 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:46.379 issued rwts: total=1148,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.379 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.379 job3: (groupid=0, jobs=1): err= 0: pid=579043: Wed May 15 00:07:13 2024
00:20:46.379 read: IOPS=4, BW=5068KiB/s (5190kB/s)(51.0MiB/10304msec)
00:20:46.379 slat (usec): min=503, max=2145.8k, avg=200282.02, stdev=591392.32
00:20:46.379 clat (msec): min=89, max=10303, avg=8444.26, stdev=3279.15
00:20:46.379 lat (msec): min=2035, max=10303, avg=8644.54, stdev=3063.49
00:20:46.379 clat percentiles (msec):
00:20:46.379 | 1.00th=[ 90], 5.00th=[ 2056], 10.00th=[ 2123], 20.00th=[ 6409],
00:20:46.379 | 30.00th=[10134], 40.00th=[10268], 50.00th=[10268], 60.00th=[10268],
00:20:46.379 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:20:46.379 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.379 | 99.99th=[10268]
00:20:46.379 lat (msec) : 100=1.96%, >=2000=98.04%
00:20:46.379 cpu : usr=0.00%, sys=0.31%, ctx=83, majf=0, minf=13057
00:20:46.379 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0%
00:20:46.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.379 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.379 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.379 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.379 job3: (groupid=0, jobs=1): err= 0: pid=579044: Wed May 15 00:07:13 2024
00:20:46.379 read: IOPS=132, BW=132MiB/s (139MB/s)(1333MiB/10067msec)
00:20:46.379 slat (usec): min=49, max=2120.4k, avg=7540.04, stdev=74047.24
00:20:46.379 clat (msec): min=7, max=6445, avg=620.67, stdev=520.10
00:20:46.379 lat (msec): min=88, max=6448, avg=628.21, stdev=541.16
00:20:46.379 clat percentiles (msec):
00:20:46.379 | 1.00th=[ 107], 5.00th=[ 288], 10.00th=[ 309], 20.00th=[ 347],
00:20:46.379 | 30.00th=[ 388], 40.00th=[ 443], 50.00th=[ 592], 60.00th=[ 651],
00:20:46.379 | 70.00th=[ 735], 80.00th=[ 810], 90.00th=[ 852], 95.00th=[ 869],
00:20:46.379 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 6409], 99.95th=[ 6477],
00:20:46.379 | 99.99th=[ 6477]
00:20:46.379 bw ( KiB/s): min=129024, max=401408, per=7.65%, avg=205463.42, stdev=74599.59, samples=12
00:20:46.379 iops : min= 126, max= 392, avg=200.58, stdev=72.86, samples=12
00:20:46.379 lat (msec) : 10=0.08%, 100=0.75%, 250=1.88%, 500=40.59%, 750=30.83%
00:20:46.379 lat (msec) : 1000=24.31%, >=2000=1.58%
00:20:46.379 cpu : usr=0.09%, sys=1.71%, ctx=878, majf=0, minf=32769
00:20:46.379 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3%
00:20:46.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.379 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:46.379 issued rwts: total=1333,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.379 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.379 job3: (groupid=0, jobs=1): err= 0: pid=579045: Wed May 15 00:07:13 2024
00:20:46.379 read: IOPS=3, BW=4023KiB/s (4120kB/s)(40.0MiB/10181msec)
00:20:46.379 slat (usec): min=424, max=2116.8k, avg=252702.80, stdev=657600.51
00:20:46.379 clat (msec): min=72, max=8574, avg=4714.26, stdev=2749.07
00:20:46.379 lat (msec): min=2088, max=10180, avg=4966.97, stdev=2775.87
00:20:46.379 clat percentiles (msec):
00:20:46.379 | 1.00th=[ 72], 5.00th=[ 2089], 10.00th=[ 2089], 20.00th=[ 2089],
00:20:46.379 | 30.00th=[ 2106], 40.00th=[ 2106], 50.00th=[ 4245], 60.00th=[ 6342],
00:20:46.379 | 70.00th=[ 6342], 80.00th=[ 8423], 90.00th=[ 8557], 95.00th=[ 8557],
00:20:46.379 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557],
00:20:46.379 | 99.99th=[ 8557]
00:20:46.379 lat (msec) : 100=2.50%, >=2000=97.50%
00:20:46.379 cpu : usr=0.00%, sys=0.29%, ctx=68, majf=0, minf=10241
00:20:46.379 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0%
00:20:46.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.379 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.379 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.379 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.379 job4: (groupid=0, jobs=1): err= 0: pid=579072: Wed May 15 00:07:13 2024
00:20:46.379 read: IOPS=22, BW=22.3MiB/s (23.4MB/s)(229MiB/10248msec)
00:20:46.379 slat (usec): min=74, max=2080.3k, avg=44112.15, stdev=265457.11
00:20:46.379 clat (msec): min=144, max=9958, avg=5150.93, stdev=2434.17
00:20:46.379 lat (msec): min=1794, max=9961, avg=5195.04, stdev=2429.19
00:20:46.379 clat percentiles (msec):
00:20:46.379 | 1.00th=[ 1787], 5.00th=[ 1838], 10.00th=[ 1854], 20.00th=[ 1888],
00:20:46.379 | 30.00th=[ 4144], 40.00th=[ 4329], 50.00th=[ 4396], 60.00th=[ 4463],
00:20:46.379 | 70.00th=[ 8154], 80.00th=[ 8221], 90.00th=[ 8356], 95.00th=[ 8356],
00:20:46.379 | 99.00th=[ 8423], 99.50th=[ 8658], 99.90th=[10000], 99.95th=[10000],
00:20:46.379 | 99.99th=[10000]
00:20:46.379 bw ( KiB/s): min= 2048, max=61440, per=1.28%, avg=34454.83, stdev=27064.50, samples=6
00:20:46.379 iops : min= 2, max= 60, avg=33.50, stdev=26.26, samples=6
00:20:46.379 lat (msec) : 250=0.44%, 2000=20.52%, >=2000=79.04%
00:20:46.379 cpu : usr=0.01%, sys=0.70%, ctx=130, majf=0, minf=32769
00:20:46.379 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=14.0%, >=64=72.5%
00:20:46.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.379 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0%
00:20:46.379 issued rwts: total=229,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.379 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.379 job4: (groupid=0, jobs=1): err= 0: pid=579073: Wed May 15 00:07:13 2024
00:20:46.379 read: IOPS=38, BW=38.9MiB/s (40.8MB/s)(476MiB/12247msec)
00:20:46.379 slat (usec): min=41, max=2136.6k, avg=21218.62, stdev=173404.31
00:20:46.379 clat (msec): min=281, max=7042, avg=2182.94, stdev=2685.79
00:20:46.380 lat (msec): min=281, max=7043, avg=2204.16, stdev=2694.71
00:20:46.380 clat percentiles (msec):
00:20:46.380 | 1.00th=[ 288], 5.00th=[ 300], 10.00th=[ 326], 20.00th=[ 347],
00:20:46.380 | 30.00th=[ 388], 40.00th=[ 439], 50.00th=[ 535], 60.00th=[ 743],
00:20:46.380 | 70.00th=[ 2140], 80.00th=[ 6208], 90.00th=[ 6879], 95.00th=[ 6946],
00:20:46.380 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013],
00:20:46.380 | 99.99th=[ 7013]
00:20:46.380 bw ( KiB/s): min= 1446, max=378880, per=4.43%, avg=119007.83, stdev=156373.47, samples=6
00:20:46.380 iops : min= 1, max= 370, avg=116.00, stdev=152.85, samples=6
00:20:46.380 lat (msec) : 500=48.74%, 750=12.39%, 1000=6.93%, 2000=1.89%, >=2000=30.04%
00:20:46.380 cpu : usr=0.01%, sys=0.93%, ctx=318, majf=0, minf=32769
00:20:46.380 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.8%
00:20:46.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.380 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:20:46.380 issued rwts: total=476,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.380 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.380 job4: (groupid=0, jobs=1): err= 0: pid=579075: Wed May 15 00:07:13 2024
00:20:46.380 read: IOPS=10, BW=10.4MiB/s (10.9MB/s)(107MiB/10256msec)
00:20:46.380 slat (usec): min=414, max=2069.0k, avg=94435.40, stdev=398437.69
00:20:46.380 clat (msec): min=150, max=10254, avg=5768.68, stdev=2576.35
00:20:46.380 lat (msec): min=2064, max=10255, avg=5863.11, stdev=2553.56
00:20:46.380 clat percentiles (msec):
00:20:46.380 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 2123], 20.00th=[ 4111],
00:20:46.380 | 30.00th=[ 4144], 40.00th=[ 4178], 50.00th=[ 4279], 60.00th=[ 6544],
00:20:46.380 | 70.00th=[ 6544], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10268],
00:20:46.380 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.380 | 99.99th=[10268]
00:20:46.380 lat (msec) : 250=0.93%, >=2000=99.07%
00:20:46.380 cpu : usr=0.00%, sys=0.74%, ctx=89, majf=0, minf=27393
00:20:46.380 IO depths : 1=0.9%, 2=1.9%, 4=3.7%, 8=7.5%, 16=15.0%, 32=29.9%, >=64=41.1%
00:20:46.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.380 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:20:46.380 issued rwts: total=107,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.380 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.380 job4: (groupid=0, jobs=1): err= 0: pid=579076: Wed May 15 00:07:13 2024
00:20:46.380 read: IOPS=4, BW=4790KiB/s (4905kB/s)(48.0MiB/10261msec)
00:20:46.380 slat (usec): min=448, max=2086.7k, avg=210636.88, stdev=594166.00
00:20:46.380 clat (msec): min=149, max=10260, avg=7069.68, stdev=3081.71
00:20:46.380 lat (msec): min=2138, max=10260, avg=7280.32, stdev=2941.00
00:20:46.380 clat percentiles (msec):
00:20:46.380 | 1.00th=[ 150], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329],
00:20:46.380 | 30.00th=[ 6342], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8658],
00:20:46.380 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:20:46.380 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:20:46.380 | 99.99th=[10268]
00:20:46.380 lat (msec) : 250=2.08%, >=2000=97.92%
00:20:46.380 cpu : usr=0.00%, sys=0.31%, ctx=79, majf=0, minf=12289
00:20:46.380 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0%
00:20:46.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.380 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:46.380 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.380 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:46.380 job4: (groupid=0, jobs=1): err= 0: pid=579077: Wed May 15 00:07:13 2024
00:20:46.380 read: IOPS=16, BW=16.6MiB/s (17.4MB/s)(171MiB/10309msec)
00:20:46.380 slat (usec): min=113, max=1998.2k, avg=59571.04, stdev=306087.36
00:20:46.380 clat (msec): min=120, max=8672, avg=6387.45, stdev=2237.21
00:20:46.380 lat (msec): min=2116, max=10230, avg=6447.02, stdev=2202.96
00:20:46.380 clat percentiles (msec):
00:20:46.380 | 1.00th=[ 2123], 5.00th=[ 2198], 10.00th=[ 2265], 20.00th=[ 4044],
00:20:46.380 | 30.00th=[ 5940], 40.00th=[ 6275], 50.00th=[ 8087], 60.00th=[ 8154],
00:20:46.380 | 70.00th=[ 8221], 80.00th=[ 8288], 90.00th=[ 8356], 95.00th=[ 8356],
00:20:46.380 | 99.00th=[ 8557], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658],
00:20:46.380 | 99.99th=[ 8658]
00:20:46.380 bw ( KiB/s): min= 4096, max=40960, per=0.55%, avg=14677.33, stdev=13915.35, samples=6
00:20:46.380 iops : min= 4, max= 40,
avg=14.33, stdev=13.59, samples=6 00:20:46.380 lat (msec) : 250=0.58%, >=2000=99.42% 00:20:46.380 cpu : usr=0.00%, sys=0.74%, ctx=173, majf=0, minf=32769 00:20:46.380 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.7%, 16=9.4%, 32=18.7%, >=64=63.2% 00:20:46.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.380 complete : 0=0.0%, 4=97.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.2% 00:20:46.380 issued rwts: total=171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.380 job4: (groupid=0, jobs=1): err= 0: pid=579078: Wed May 15 00:07:13 2024 00:20:46.380 read: IOPS=15, BW=15.1MiB/s (15.8MB/s)(156MiB/10329msec) 00:20:46.380 slat (usec): min=129, max=2079.5k, avg=65239.24, stdev=319711.67 00:20:46.380 clat (msec): min=149, max=8651, avg=5788.69, stdev=1376.58 00:20:46.380 lat (msec): min=2147, max=8665, avg=5853.93, stdev=1330.86 00:20:46.380 clat percentiles (msec): 00:20:46.380 | 1.00th=[ 2140], 5.00th=[ 2232], 10.00th=[ 4396], 20.00th=[ 5738], 00:20:46.380 | 30.00th=[ 5805], 40.00th=[ 5873], 50.00th=[ 5940], 60.00th=[ 6007], 00:20:46.380 | 70.00th=[ 6074], 80.00th=[ 6208], 90.00th=[ 7953], 95.00th=[ 8490], 00:20:46.380 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:20:46.380 | 99.99th=[ 8658] 00:20:46.380 bw ( KiB/s): min= 4087, max=22528, per=0.53%, avg=14333.75, stdev=8023.35, samples=4 00:20:46.380 iops : min= 3, max= 22, avg=13.75, stdev= 8.26, samples=4 00:20:46.380 lat (msec) : 250=0.64%, >=2000=99.36% 00:20:46.380 cpu : usr=0.00%, sys=1.05%, ctx=222, majf=0, minf=32769 00:20:46.380 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.1%, 16=10.3%, 32=20.5%, >=64=59.6% 00:20:46.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.380 complete : 0=0.0%, 4=96.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.3% 00:20:46.380 issued rwts: total=156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.380 job4: (groupid=0, jobs=1): err= 0: pid=579079: Wed May 15 00:07:13 2024 00:20:46.380 read: IOPS=159, BW=160MiB/s (168MB/s)(1615MiB/10098msec) 00:20:46.380 slat (usec): min=47, max=2052.2k, avg=6194.96, stdev=63267.06 00:20:46.380 clat (msec): min=84, max=4299, avg=540.75, stdev=462.29 00:20:46.380 lat (msec): min=106, max=4348, avg=546.95, stdev=472.01 00:20:46.380 clat percentiles (msec): 00:20:46.380 | 1.00th=[ 178], 5.00th=[ 284], 10.00th=[ 296], 20.00th=[ 334], 00:20:46.380 | 30.00th=[ 388], 40.00th=[ 426], 50.00th=[ 439], 60.00th=[ 485], 00:20:46.380 | 70.00th=[ 592], 80.00th=[ 625], 90.00th=[ 751], 95.00th=[ 827], 00:20:46.380 | 99.00th=[ 2769], 99.50th=[ 4279], 99.90th=[ 4329], 99.95th=[ 4329], 00:20:46.380 | 99.99th=[ 4329] 00:20:46.380 bw ( KiB/s): min=94208, max=431265, per=9.44%, avg=253657.33, stdev=101486.59, samples=12 00:20:46.380 iops : min= 92, max= 421, avg=247.58, stdev=99.08, samples=12 00:20:46.380 lat (msec) : 100=0.06%, 250=1.86%, 500=58.95%, 750=28.79%, 1000=8.42% 00:20:46.380 lat (msec) : >=2000=1.92% 00:20:46.380 cpu : usr=0.09%, sys=2.02%, ctx=1177, majf=0, minf=32769 00:20:46.380 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:20:46.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.380 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.380 issued rwts: total=1615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.380 latency : target=0, window=0, percentile=100.00%, depth=128 
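Each timestamped block above is fio's standard per-job read report: the "read:" line carries the headline IOPS and bandwidth, "slat"/"clat"/"lat" break out submission, completion, and total latency with percentiles, the "lat (msec)" lines give the share of I/Os falling into each latency bucket, and "IO depths" shows the queue-depth distribution. A quick way to pull just the headline figures out of a console log in this format (a sketch only; "console.log" is a hypothetical saved copy of this output):

    # Extract the per-job "read:" summaries, e.g. "read: IOPS=29, BW=29.5MiB/s"
    grep -o 'read: IOPS=[^,]*, BW=[^ ]*' console.log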
00:20:46.380 job4: (groupid=0, jobs=1): err= 0: pid=579080: Wed May 15 00:07:13 2024 00:20:46.380 read: IOPS=100, BW=101MiB/s (106MB/s)(1041MiB/10329msec) 00:20:46.380 slat (usec): min=35, max=2058.0k, avg=9793.32, stdev=99016.32 00:20:46.380 clat (msec): min=124, max=4787, avg=811.49, stdev=822.40 00:20:46.380 lat (msec): min=273, max=4788, avg=821.29, stdev=832.42 00:20:46.380 clat percentiles (msec): 00:20:46.380 | 1.00th=[ 279], 5.00th=[ 305], 10.00th=[ 309], 20.00th=[ 334], 00:20:46.380 | 30.00th=[ 363], 40.00th=[ 405], 50.00th=[ 485], 60.00th=[ 642], 00:20:46.380 | 70.00th=[ 709], 80.00th=[ 827], 90.00th=[ 2333], 95.00th=[ 2467], 00:20:46.380 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:20:46.380 | 99.99th=[ 4799] 00:20:46.380 bw ( KiB/s): min=71680, max=438272, per=8.70%, avg=233681.50, stdev=132760.11, samples=8 00:20:46.380 iops : min= 70, max= 428, avg=228.12, stdev=129.68, samples=8 00:20:46.380 lat (msec) : 250=0.10%, 500=51.10%, 750=23.82%, 1000=8.93%, 2000=1.83% 00:20:46.380 lat (msec) : >=2000=14.22% 00:20:46.380 cpu : usr=0.09%, sys=1.90%, ctx=766, majf=0, minf=32769 00:20:46.380 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:20:46.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.380 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.380 issued rwts: total=1041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.380 job4: (groupid=0, jobs=1): err= 0: pid=579081: Wed May 15 00:07:13 2024 00:20:46.380 read: IOPS=12, BW=12.3MiB/s (12.9MB/s)(151MiB/12267msec) 00:20:46.380 slat (usec): min=131, max=2136.6k, avg=67035.22, stdev=335184.84 00:20:46.380 clat (msec): min=2143, max=12265, avg=8003.69, stdev=2455.55 00:20:46.380 lat (msec): min=4280, max=12266, avg=8070.73, stdev=2426.17 00:20:46.381 clat percentiles (msec): 00:20:46.381 | 1.00th=[ 4279], 5.00th=[ 4279], 10.00th=[ 5873], 20.00th=[ 6141], 00:20:46.381 | 30.00th=[ 6208], 40.00th=[ 6275], 50.00th=[ 7617], 60.00th=[ 8356], 00:20:46.381 | 70.00th=[ 8490], 80.00th=[11610], 90.00th=[11745], 95.00th=[11879], 00:20:46.381 | 99.00th=[11879], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:20:46.381 | 99.99th=[12281] 00:20:46.381 bw ( KiB/s): min= 1389, max=30658, per=0.60%, avg=16143.67, stdev=14635.98, samples=3 00:20:46.381 iops : min= 1, max= 29, avg=15.33, stdev=14.01, samples=3 00:20:46.381 lat (msec) : >=2000=100.00% 00:20:46.381 cpu : usr=0.03%, sys=0.85%, ctx=96, majf=0, minf=32769 00:20:46.381 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.3%, 16=10.6%, 32=21.2%, >=64=58.3% 00:20:46.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.381 complete : 0=0.0%, 4=96.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.0% 00:20:46.381 issued rwts: total=151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.381 job4: (groupid=0, jobs=1): err= 0: pid=579082: Wed May 15 00:07:13 2024 00:20:46.381 read: IOPS=36, BW=36.8MiB/s (38.6MB/s)(379MiB/10286msec) 00:20:46.381 slat (usec): min=45, max=2107.9k, avg=26751.19, stdev=199034.71 00:20:46.381 clat (msec): min=144, max=6706, avg=1733.24, stdev=1469.39 00:20:46.381 lat (msec): min=614, max=6708, avg=1759.99, stdev=1487.54 00:20:46.381 clat percentiles (msec): 00:20:46.381 | 1.00th=[ 617], 5.00th=[ 642], 10.00th=[ 676], 20.00th=[ 709], 00:20:46.381 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 768], 
60.00th=[ 2265], 00:20:46.381 | 70.00th=[ 2500], 80.00th=[ 2735], 90.00th=[ 2937], 95.00th=[ 5067], 00:20:46.381 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:20:46.381 | 99.99th=[ 6678] 00:20:46.381 bw ( KiB/s): min=24576, max=178176, per=4.79%, avg=128512.00, stdev=70279.03, samples=4 00:20:46.381 iops : min= 24, max= 174, avg=125.50, stdev=68.63, samples=4 00:20:46.381 lat (msec) : 250=0.26%, 750=45.12%, 1000=12.40%, >=2000=42.22% 00:20:46.381 cpu : usr=0.02%, sys=1.27%, ctx=369, majf=0, minf=32769 00:20:46.381 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:20:46.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.381 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:46.381 issued rwts: total=379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.381 job4: (groupid=0, jobs=1): err= 0: pid=579084: Wed May 15 00:07:13 2024 00:20:46.381 read: IOPS=97, BW=97.4MiB/s (102MB/s)(978MiB/10036msec) 00:20:46.381 slat (usec): min=43, max=2056.9k, avg=10218.64, stdev=106025.88 00:20:46.381 clat (msec): min=34, max=4746, avg=868.79, stdev=961.08 00:20:46.381 lat (msec): min=36, max=4749, avg=879.01, stdev=970.76 00:20:46.381 clat percentiles (msec): 00:20:46.381 | 1.00th=[ 79], 5.00th=[ 146], 10.00th=[ 284], 20.00th=[ 330], 00:20:46.381 | 30.00th=[ 355], 40.00th=[ 405], 50.00th=[ 485], 60.00th=[ 575], 00:20:46.381 | 70.00th=[ 676], 80.00th=[ 944], 90.00th=[ 2735], 95.00th=[ 2802], 00:20:46.381 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732], 00:20:46.381 | 99.99th=[ 4732] 00:20:46.381 bw ( KiB/s): min=159744, max=405504, per=9.27%, avg=248978.29, stdev=87896.01, samples=7 00:20:46.381 iops : min= 156, max= 396, avg=243.14, stdev=85.84, samples=7 00:20:46.381 lat (msec) : 50=0.31%, 100=1.64%, 250=6.44%, 500=43.76%, 750=23.52% 00:20:46.381 lat (msec) : 1000=6.65%, 2000=1.64%, >=2000=16.05% 00:20:46.381 cpu : usr=0.06%, sys=1.73%, ctx=627, majf=0, minf=32769 00:20:46.381 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:20:46.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.381 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.381 issued rwts: total=978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.381 job4: (groupid=0, jobs=1): err= 0: pid=579085: Wed May 15 00:07:13 2024 00:20:46.381 read: IOPS=16, BW=16.3MiB/s (17.1MB/s)(168MiB/10299msec) 00:20:46.381 slat (usec): min=104, max=2041.3k, avg=60592.74, stdev=301768.44 00:20:46.381 clat (msec): min=117, max=8674, avg=5554.68, stdev=1569.13 00:20:46.381 lat (msec): min=2117, max=8674, avg=5615.27, stdev=1538.40 00:20:46.381 clat percentiles (msec): 00:20:46.381 | 1.00th=[ 2123], 5.00th=[ 2265], 10.00th=[ 2265], 20.00th=[ 4463], 00:20:46.381 | 30.00th=[ 5805], 40.00th=[ 5873], 50.00th=[ 5940], 60.00th=[ 6007], 00:20:46.381 | 70.00th=[ 6141], 80.00th=[ 6342], 90.00th=[ 6544], 95.00th=[ 7953], 00:20:46.381 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:20:46.381 | 99.99th=[ 8658] 00:20:46.381 bw ( KiB/s): min= 4104, max=40960, per=0.76%, avg=20482.00, stdev=16635.40, samples=4 00:20:46.381 iops : min= 4, max= 40, avg=20.00, stdev=16.25, samples=4 00:20:46.381 lat (msec) : 250=0.60%, >=2000=99.40% 00:20:46.381 cpu : usr=0.00%, sys=1.00%, ctx=205, majf=0, minf=32769 
00:20:46.381 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.5%, 32=19.0%, >=64=62.5% 00:20:46.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.381 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:20:46.381 issued rwts: total=168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.381 job4: (groupid=0, jobs=1): err= 0: pid=579086: Wed May 15 00:07:13 2024 00:20:46.381 read: IOPS=122, BW=122MiB/s (128MB/s)(1232MiB/10080msec) 00:20:46.381 slat (usec): min=58, max=2146.8k, avg=8141.81, stdev=84326.70 00:20:46.381 clat (msec): min=41, max=5000, avg=996.16, stdev=1264.35 00:20:46.381 lat (msec): min=98, max=5000, avg=1004.30, stdev=1269.22 00:20:46.381 clat percentiles (msec): 00:20:46.381 | 1.00th=[ 115], 5.00th=[ 397], 10.00th=[ 422], 20.00th=[ 451], 00:20:46.381 | 30.00th=[ 468], 40.00th=[ 498], 50.00th=[ 558], 60.00th=[ 609], 00:20:46.381 | 70.00th=[ 726], 80.00th=[ 793], 90.00th=[ 2869], 95.00th=[ 4866], 00:20:46.381 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 5000], 99.95th=[ 5000], 00:20:46.381 | 99.99th=[ 5000] 00:20:46.381 bw ( KiB/s): min= 2048, max=288768, per=6.48%, avg=173908.31, stdev=94652.36, samples=13 00:20:46.381 iops : min= 2, max= 282, avg=169.77, stdev=92.49, samples=13 00:20:46.381 lat (msec) : 50=0.08%, 100=0.08%, 250=2.35%, 500=38.23%, 750=35.23% 00:20:46.381 lat (msec) : 1000=12.34%, >=2000=11.69% 00:20:46.381 cpu : usr=0.09%, sys=1.69%, ctx=906, majf=0, minf=32769 00:20:46.381 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:20:46.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.381 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.381 issued rwts: total=1232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.381 job5: (groupid=0, jobs=1): err= 0: pid=579104: Wed May 15 00:07:13 2024 00:20:46.381 read: IOPS=84, BW=84.0MiB/s (88.1MB/s)(845MiB/10054msec) 00:20:46.381 slat (usec): min=44, max=2082.3k, avg=11862.68, stdev=115508.64 00:20:46.381 clat (msec): min=23, max=4493, avg=1008.79, stdev=993.17 00:20:46.381 lat (msec): min=59, max=4495, avg=1020.65, stdev=1002.13 00:20:46.381 clat percentiles (msec): 00:20:46.381 | 1.00th=[ 70], 5.00th=[ 268], 10.00th=[ 351], 20.00th=[ 409], 00:20:46.381 | 30.00th=[ 435], 40.00th=[ 456], 50.00th=[ 535], 60.00th=[ 743], 00:20:46.381 | 70.00th=[ 961], 80.00th=[ 1011], 90.00th=[ 2869], 95.00th=[ 3004], 00:20:46.381 | 99.00th=[ 4396], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:20:46.381 | 99.99th=[ 4463] 00:20:46.381 bw ( KiB/s): min=26624, max=303711, per=6.84%, avg=183700.62, stdev=104520.75, samples=8 00:20:46.381 iops : min= 26, max= 296, avg=179.25, stdev=101.89, samples=8 00:20:46.381 lat (msec) : 50=0.12%, 100=2.01%, 250=1.78%, 500=45.09%, 750=11.01% 00:20:46.381 lat (msec) : 1000=19.17%, 2000=2.49%, >=2000=18.34% 00:20:46.381 cpu : usr=0.05%, sys=1.42%, ctx=540, majf=0, minf=32769 00:20:46.381 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.5% 00:20:46.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.381 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.381 issued rwts: total=845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.381 job5: (groupid=0, jobs=1): err= 0: 
pid=579105: Wed May 15 00:07:13 2024 00:20:46.381 read: IOPS=163, BW=163MiB/s (171MB/s)(1683MiB/10304msec) 00:20:46.381 slat (usec): min=56, max=1982.2k, avg=6027.24, stdev=54614.38 00:20:46.381 clat (msec): min=150, max=4382, avg=749.48, stdev=611.97 00:20:46.381 lat (msec): min=272, max=4386, avg=755.51, stdev=613.79 00:20:46.381 clat percentiles (msec): 00:20:46.381 | 1.00th=[ 275], 5.00th=[ 284], 10.00th=[ 309], 20.00th=[ 355], 00:20:46.381 | 30.00th=[ 380], 40.00th=[ 418], 50.00th=[ 456], 60.00th=[ 506], 00:20:46.381 | 70.00th=[ 802], 80.00th=[ 1284], 90.00th=[ 1485], 95.00th=[ 2366], 00:20:46.381 | 99.00th=[ 2500], 99.50th=[ 2534], 99.90th=[ 4396], 99.95th=[ 4396], 00:20:46.381 | 99.99th=[ 4396] 00:20:46.381 bw ( KiB/s): min=12288, max=434176, per=7.91%, avg=212309.33, stdev=135817.34, samples=15 00:20:46.381 iops : min= 12, max= 424, avg=207.33, stdev=132.63, samples=15 00:20:46.381 lat (msec) : 250=0.06%, 500=58.70%, 750=9.45%, 1000=9.09%, 2000=15.15% 00:20:46.381 lat (msec) : >=2000=7.55% 00:20:46.381 cpu : usr=0.07%, sys=2.12%, ctx=1150, majf=0, minf=32769 00:20:46.381 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.3% 00:20:46.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.381 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.381 issued rwts: total=1683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.381 job5: (groupid=0, jobs=1): err= 0: pid=579106: Wed May 15 00:07:13 2024 00:20:46.381 read: IOPS=109, BW=110MiB/s (115MB/s)(1104MiB/10051msec) 00:20:46.381 slat (usec): min=49, max=1978.6k, avg=9056.78, stdev=93075.84 00:20:46.381 clat (msec): min=44, max=4788, avg=963.53, stdev=1276.60 00:20:46.381 lat (msec): min=51, max=4793, avg=972.59, stdev=1282.05 00:20:46.381 clat percentiles (msec): 00:20:46.381 | 1.00th=[ 94], 5.00th=[ 243], 10.00th=[ 338], 20.00th=[ 405], 00:20:46.381 | 30.00th=[ 422], 40.00th=[ 439], 50.00th=[ 456], 60.00th=[ 477], 00:20:46.381 | 70.00th=[ 542], 80.00th=[ 726], 90.00th=[ 2735], 95.00th=[ 4665], 00:20:46.381 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 00:20:46.381 | 99.99th=[ 4799] 00:20:46.381 bw ( KiB/s): min=20439, max=314762, per=7.45%, avg=200120.80, stdev=107353.64, samples=10 00:20:46.381 iops : min= 19, max= 307, avg=195.20, stdev=104.94, samples=10 00:20:46.381 lat (msec) : 50=0.09%, 100=1.27%, 250=3.80%, 500=61.32%, 750=16.12% 00:20:46.381 lat (msec) : 1000=2.08%, >=2000=15.31% 00:20:46.381 cpu : usr=0.09%, sys=1.44%, ctx=877, majf=0, minf=32769 00:20:46.381 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:20:46.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.381 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.382 issued rwts: total=1104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.382 job5: (groupid=0, jobs=1): err= 0: pid=579108: Wed May 15 00:07:13 2024 00:20:46.382 read: IOPS=111, BW=112MiB/s (117MB/s)(1126MiB/10095msec) 00:20:46.382 slat (usec): min=44, max=1972.0k, avg=8874.46, stdev=86575.62 00:20:46.382 clat (msec): min=92, max=5043, avg=1026.35, stdev=1348.21 00:20:46.382 lat (msec): min=95, max=5043, avg=1035.22, stdev=1354.18 00:20:46.382 clat percentiles (msec): 00:20:46.382 | 1.00th=[ 159], 5.00th=[ 186], 10.00th=[ 224], 20.00th=[ 296], 00:20:46.382 | 30.00th=[ 338], 40.00th=[ 
418], 50.00th=[ 451], 60.00th=[ 468], 00:20:46.382 | 70.00th=[ 584], 80.00th=[ 1028], 90.00th=[ 3004], 95.00th=[ 4799], 00:20:46.382 | 99.00th=[ 4933], 99.50th=[ 5000], 99.90th=[ 5067], 99.95th=[ 5067], 00:20:46.382 | 99.99th=[ 5067] 00:20:46.382 bw ( KiB/s): min=22483, max=413696, per=6.93%, avg=185984.82, stdev=144470.99, samples=11 00:20:46.382 iops : min= 21, max= 404, avg=181.45, stdev=141.29, samples=11 00:20:46.382 lat (msec) : 100=0.18%, 250=15.72%, 500=49.38%, 750=10.04%, 1000=3.46% 00:20:46.382 lat (msec) : 2000=1.33%, >=2000=19.89% 00:20:46.382 cpu : usr=0.07%, sys=1.97%, ctx=903, majf=0, minf=32769 00:20:46.382 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:20:46.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.382 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.382 issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.382 job5: (groupid=0, jobs=1): err= 0: pid=579109: Wed May 15 00:07:13 2024 00:20:46.382 read: IOPS=67, BW=67.5MiB/s (70.8MB/s)(680MiB/10077msec) 00:20:46.382 slat (usec): min=42, max=2210.6k, avg=14703.75, stdev=127547.38 00:20:46.382 clat (msec): min=73, max=4613, avg=1301.36, stdev=1173.70 00:20:46.382 lat (msec): min=76, max=4618, avg=1316.06, stdev=1182.25 00:20:46.382 clat percentiles (msec): 00:20:46.382 | 1.00th=[ 122], 5.00th=[ 300], 10.00th=[ 405], 20.00th=[ 498], 00:20:46.382 | 30.00th=[ 584], 40.00th=[ 667], 50.00th=[ 718], 60.00th=[ 860], 00:20:46.382 | 70.00th=[ 1150], 80.00th=[ 2869], 90.00th=[ 3071], 95.00th=[ 3339], 00:20:46.382 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:20:46.382 | 99.99th=[ 4597] 00:20:46.382 bw ( KiB/s): min=90112, max=260096, per=6.02%, avg=161792.00, stdev=55636.29, samples=7 00:20:46.382 iops : min= 88, max= 254, avg=158.00, stdev=54.33, samples=7 00:20:46.382 lat (msec) : 100=0.29%, 250=4.26%, 500=15.74%, 750=33.97%, 1000=10.15% 00:20:46.382 lat (msec) : 2000=11.03%, >=2000=24.56% 00:20:46.382 cpu : usr=0.01%, sys=1.40%, ctx=559, majf=0, minf=32769 00:20:46.382 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:20:46.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.382 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:46.382 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.382 job5: (groupid=0, jobs=1): err= 0: pid=579110: Wed May 15 00:07:13 2024 00:20:46.382 read: IOPS=85, BW=85.1MiB/s (89.2MB/s)(857MiB/10072msec) 00:20:46.382 slat (usec): min=43, max=2068.1k, avg=11667.72, stdev=100204.64 00:20:46.382 clat (msec): min=64, max=5997, avg=974.85, stdev=948.60 00:20:46.382 lat (msec): min=71, max=5998, avg=986.52, stdev=963.14 00:20:46.382 clat percentiles (msec): 00:20:46.382 | 1.00th=[ 165], 5.00th=[ 397], 10.00th=[ 477], 20.00th=[ 558], 00:20:46.382 | 30.00th=[ 625], 40.00th=[ 659], 50.00th=[ 718], 60.00th=[ 751], 00:20:46.382 | 70.00th=[ 793], 80.00th=[ 894], 90.00th=[ 2072], 95.00th=[ 2232], 00:20:46.382 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 6007], 99.95th=[ 6007], 00:20:46.382 | 99.99th=[ 6007] 00:20:46.382 bw ( KiB/s): min=18432, max=288191, per=6.18%, avg=166025.44, stdev=85621.12, samples=9 00:20:46.382 iops : min= 18, max= 281, avg=162.00, stdev=83.59, samples=9 00:20:46.382 lat (msec) : 100=0.58%, 250=1.63%, 
500=12.14%, 750=44.57%, 1000=24.04% 00:20:46.382 lat (msec) : 2000=3.73%, >=2000=13.30% 00:20:46.382 cpu : usr=0.05%, sys=1.69%, ctx=736, majf=0, minf=32769 00:20:46.382 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.6% 00:20:46.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.382 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.382 issued rwts: total=857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.382 job5: (groupid=0, jobs=1): err= 0: pid=579111: Wed May 15 00:07:13 2024 00:20:46.382 read: IOPS=25, BW=25.0MiB/s (26.2MB/s)(252MiB/10072msec) 00:20:46.382 slat (usec): min=55, max=2009.9k, avg=39679.45, stdev=231767.52 00:20:46.382 clat (msec): min=70, max=8757, avg=2264.81, stdev=2534.04 00:20:46.382 lat (msec): min=77, max=8759, avg=2304.49, stdev=2564.06 00:20:46.382 clat percentiles (msec): 00:20:46.382 | 1.00th=[ 85], 5.00th=[ 236], 10.00th=[ 409], 20.00th=[ 684], 00:20:46.382 | 30.00th=[ 919], 40.00th=[ 1150], 50.00th=[ 1234], 60.00th=[ 1267], 00:20:46.382 | 70.00th=[ 1351], 80.00th=[ 3171], 90.00th=[ 7148], 95.00th=[ 8658], 00:20:46.382 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:20:46.382 | 99.99th=[ 8792] 00:20:46.382 bw ( KiB/s): min=69632, max=102400, per=3.18%, avg=85333.33, stdev=16426.61, samples=3 00:20:46.382 iops : min= 68, max= 100, avg=83.33, stdev=16.04, samples=3 00:20:46.382 lat (msec) : 100=1.19%, 250=4.37%, 500=7.54%, 750=8.73%, 1000=8.33% 00:20:46.382 lat (msec) : 2000=44.05%, >=2000=25.79% 00:20:46.382 cpu : usr=0.01%, sys=1.06%, ctx=365, majf=0, minf=32769 00:20:46.382 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.7%, >=64=75.0% 00:20:46.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.382 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:20:46.382 issued rwts: total=252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.382 job5: (groupid=0, jobs=1): err= 0: pid=579112: Wed May 15 00:07:13 2024 00:20:46.382 read: IOPS=134, BW=135MiB/s (141MB/s)(1386MiB/10288msec) 00:20:46.382 slat (usec): min=55, max=2029.2k, avg=7307.96, stdev=71825.31 00:20:46.382 clat (msec): min=150, max=2896, avg=808.96, stdev=704.02 00:20:46.382 lat (msec): min=298, max=2898, avg=816.27, stdev=707.10 00:20:46.382 clat percentiles (msec): 00:20:46.382 | 1.00th=[ 309], 5.00th=[ 317], 10.00th=[ 321], 20.00th=[ 338], 00:20:46.382 | 30.00th=[ 359], 40.00th=[ 380], 50.00th=[ 489], 60.00th=[ 676], 00:20:46.382 | 70.00th=[ 869], 80.00th=[ 1083], 90.00th=[ 2106], 95.00th=[ 2668], 00:20:46.382 | 99.00th=[ 2836], 99.50th=[ 2869], 99.90th=[ 2903], 99.95th=[ 2903], 00:20:46.382 | 99.99th=[ 2903] 00:20:46.382 bw ( KiB/s): min=133120, max=409600, per=8.72%, avg=234216.73, stdev=104877.89, samples=11 00:20:46.382 iops : min= 130, max= 400, avg=228.73, stdev=102.42, samples=11 00:20:46.382 lat (msec) : 250=0.07%, 500=50.14%, 750=11.47%, 1000=17.97%, 2000=9.16% 00:20:46.382 lat (msec) : >=2000=11.18% 00:20:46.382 cpu : usr=0.10%, sys=1.62%, ctx=1037, majf=0, minf=32769 00:20:46.382 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 00:20:46.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.382 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.382 issued rwts: total=1386,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:20:46.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.382 job5: (groupid=0, jobs=1): err= 0: pid=579113: Wed May 15 00:07:13 2024 00:20:46.382 read: IOPS=41, BW=41.2MiB/s (43.2MB/s)(414MiB/10056msec) 00:20:46.382 slat (usec): min=53, max=3806.1k, avg=24152.22, stdev=218600.55 00:20:46.382 clat (msec): min=53, max=8199, avg=2147.45, stdev=2542.14 00:20:46.382 lat (msec): min=68, max=8386, avg=2171.60, stdev=2558.78 00:20:46.382 clat percentiles (msec): 00:20:46.382 | 1.00th=[ 94], 5.00th=[ 213], 10.00th=[ 372], 20.00th=[ 575], 00:20:46.382 | 30.00th=[ 751], 40.00th=[ 776], 50.00th=[ 793], 60.00th=[ 818], 00:20:46.382 | 70.00th=[ 869], 80.00th=[ 4665], 90.00th=[ 6678], 95.00th=[ 6745], 00:20:46.382 | 99.00th=[ 8154], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:20:46.382 | 99.99th=[ 8221] 00:20:46.383 bw ( KiB/s): min=110592, max=163840, per=5.47%, avg=146944.00, stdev=24823.64, samples=4 00:20:46.383 iops : min= 108, max= 160, avg=143.50, stdev=24.24, samples=4 00:20:46.383 lat (msec) : 100=1.45%, 250=4.59%, 500=9.18%, 750=15.70%, 1000=42.27% 00:20:46.383 lat (msec) : >=2000=26.81% 00:20:46.383 cpu : usr=0.04%, sys=1.12%, ctx=506, majf=0, minf=32769 00:20:46.383 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8% 00:20:46.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.383 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:46.383 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.383 job5: (groupid=0, jobs=1): err= 0: pid=579114: Wed May 15 00:07:13 2024 00:20:46.383 read: IOPS=38, BW=39.0MiB/s (40.9MB/s)(392MiB/10054msec) 00:20:46.383 slat (usec): min=51, max=4291.4k, avg=25507.82, stdev=246380.48 00:20:46.383 clat (msec): min=50, max=8638, avg=1364.72, stdev=2164.15 00:20:46.383 lat (msec): min=53, max=8751, avg=1390.23, stdev=2195.64 00:20:46.383 clat percentiles (msec): 00:20:46.383 | 1.00th=[ 55], 5.00th=[ 62], 10.00th=[ 174], 20.00th=[ 451], 00:20:46.383 | 30.00th=[ 550], 40.00th=[ 609], 50.00th=[ 642], 60.00th=[ 659], 00:20:46.383 | 70.00th=[ 676], 80.00th=[ 718], 90.00th=[ 6946], 95.00th=[ 7013], 00:20:46.383 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:20:46.383 | 99.99th=[ 8658] 00:20:46.383 bw ( KiB/s): min=137216, max=231424, per=6.74%, avg=180906.67, stdev=47473.56, samples=3 00:20:46.383 iops : min= 134, max= 226, avg=176.67, stdev=46.36, samples=3 00:20:46.383 lat (msec) : 100=7.14%, 250=5.61%, 500=8.93%, 750=61.48%, 1000=3.57% 00:20:46.383 lat (msec) : >=2000=13.27% 00:20:46.383 cpu : usr=0.03%, sys=1.09%, ctx=604, majf=0, minf=32769 00:20:46.383 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:20:46.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.383 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:46.383 issued rwts: total=392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.383 job5: (groupid=0, jobs=1): err= 0: pid=579116: Wed May 15 00:07:13 2024 00:20:46.383 read: IOPS=39, BW=39.2MiB/s (41.1MB/s)(394MiB/10063msec) 00:20:46.383 slat (usec): min=51, max=2095.7k, avg=25379.87, stdev=177343.37 00:20:46.383 clat (msec): min=59, max=8377, avg=2027.40, stdev=2463.64 00:20:46.383 lat (msec): min=65, max=8391, avg=2052.78, stdev=2482.46 00:20:46.383 clat percentiles 
(msec): 00:20:46.383 | 1.00th=[ 71], 5.00th=[ 220], 10.00th=[ 435], 20.00th=[ 558], 00:20:46.383 | 30.00th=[ 592], 40.00th=[ 676], 50.00th=[ 768], 60.00th=[ 911], 00:20:46.383 | 70.00th=[ 1020], 80.00th=[ 4866], 90.00th=[ 6678], 95.00th=[ 6812], 00:20:46.383 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 00:20:46.383 | 99.99th=[ 8356] 00:20:46.383 bw ( KiB/s): min=61440, max=229376, per=5.09%, avg=136704.00, stdev=71482.24, samples=4 00:20:46.383 iops : min= 60, max= 224, avg=133.50, stdev=69.81, samples=4 00:20:46.383 lat (msec) : 100=2.28%, 250=4.06%, 500=5.33%, 750=35.79%, 1000=21.32% 00:20:46.383 lat (msec) : 2000=6.09%, >=2000=25.13% 00:20:46.383 cpu : usr=0.03%, sys=1.29%, ctx=656, majf=0, minf=32769 00:20:46.383 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0% 00:20:46.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.383 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:46.383 issued rwts: total=394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.383 job5: (groupid=0, jobs=1): err= 0: pid=579117: Wed May 15 00:07:13 2024 00:20:46.383 read: IOPS=170, BW=170MiB/s (179MB/s)(1743MiB/10224msec) 00:20:46.383 slat (usec): min=57, max=975658, avg=5817.37, stdev=25842.86 00:20:46.383 clat (msec): min=71, max=2278, avg=709.97, stdev=348.50 00:20:46.383 lat (msec): min=332, max=2283, avg=715.79, stdev=349.54 00:20:46.383 clat percentiles (msec): 00:20:46.383 | 1.00th=[ 347], 5.00th=[ 422], 10.00th=[ 464], 20.00th=[ 477], 00:20:46.383 | 30.00th=[ 489], 40.00th=[ 558], 50.00th=[ 642], 60.00th=[ 718], 00:20:46.383 | 70.00th=[ 760], 80.00th=[ 810], 90.00th=[ 894], 95.00th=[ 1653], 00:20:46.383 | 99.00th=[ 2165], 99.50th=[ 2232], 99.90th=[ 2265], 99.95th=[ 2265], 00:20:46.383 | 99.99th=[ 2265] 00:20:46.383 bw ( KiB/s): min=16384, max=296960, per=6.84%, avg=183751.11, stdev=81816.95, samples=18 00:20:46.383 iops : min= 16, max= 290, avg=179.44, stdev=79.90, samples=18 00:20:46.383 lat (msec) : 100=0.06%, 500=32.36%, 750=33.68%, 1000=25.47%, 2000=6.37% 00:20:46.383 lat (msec) : >=2000=2.07% 00:20:46.383 cpu : usr=0.19%, sys=2.01%, ctx=1372, majf=0, minf=32769 00:20:46.383 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:20:46.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.383 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.383 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.383 job5: (groupid=0, jobs=1): err= 0: pid=579118: Wed May 15 00:07:13 2024 00:20:46.383 read: IOPS=95, BW=95.9MiB/s (101MB/s)(968MiB/10092msec) 00:20:46.383 slat (usec): min=40, max=2043.9k, avg=10358.66, stdev=96095.39 00:20:46.383 clat (msec): min=57, max=4587, avg=1107.27, stdev=1075.61 00:20:46.383 lat (msec): min=110, max=4587, avg=1117.63, stdev=1081.66 00:20:46.383 clat percentiles (msec): 00:20:46.383 | 1.00th=[ 112], 5.00th=[ 351], 10.00th=[ 384], 20.00th=[ 409], 00:20:46.383 | 30.00th=[ 451], 40.00th=[ 493], 50.00th=[ 642], 60.00th=[ 760], 00:20:46.383 | 70.00th=[ 869], 80.00th=[ 2265], 90.00th=[ 2836], 95.00th=[ 2869], 00:20:46.383 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:20:46.383 | 99.99th=[ 4597] 00:20:46.383 bw ( KiB/s): min=12288, max=342016, per=6.40%, avg=171975.80, stdev=91958.73, samples=10 00:20:46.383 iops : 
min= 12, max= 334, avg=167.80, stdev=89.86, samples=10 00:20:46.383 lat (msec) : 100=0.10%, 250=2.79%, 500=38.33%, 750=16.22%, 1000=18.90% 00:20:46.383 lat (msec) : >=2000=23.66% 00:20:46.383 cpu : usr=0.12%, sys=1.41%, ctx=723, majf=0, minf=32769 00:20:46.383 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:20:46.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.383 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.383 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.383 00:20:46.383 Run status group 0 (all jobs): 00:20:46.383 READ: bw=2623MiB/s (2750MB/s), 1000KiB/s-170MiB/s (1024kB/s-179MB/s), io=31.7GiB (34.0GB), run=10036-12381msec 00:20:46.383 00:20:46.383 Disk stats (read/write): 00:20:46.383 nvme0n1: ios=48915/0, merge=0/0, ticks=9223935/0, in_queue=9223935, util=98.68% 00:20:46.383 nvme1n1: ios=12420/0, merge=0/0, ticks=10128127/0, in_queue=10128127, util=98.86% 00:20:46.383 nvme2n1: ios=17719/0, merge=0/0, ticks=9760672/0, in_queue=9760672, util=98.94% 00:20:46.383 nvme3n1: ios=31136/0, merge=0/0, ticks=10691087/0, in_queue=10691087, util=98.70% 00:20:46.383 nvme4n1: ios=53995/0, merge=0/0, ticks=9488544/0, in_queue=9488544, util=99.06% 00:20:46.383 nvme5n1: ios=94746/0, merge=0/0, ticks=10967458/0, in_queue=10967458, util=99.14% 00:20:46.383 00:07:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:20:46.383 00:07:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:20:46.383 00:07:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:46.383 00:07:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:20:46.641 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000000 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000000 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.641 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:46.898 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.898 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:46.898 00:07:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:49.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000001 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000001 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:49.448 00:07:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:20:51.346 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000002 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000002 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:51.346 00:07:20 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:20:53.866 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:20:53.866 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:20:53.866 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:20:53.866 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:53.866 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000003 00:20:53.866 00:07:22 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:53.866 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000003 00:20:53.866 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:20:53.867 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:53.867 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.867 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:53.867 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.867 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:53.867 00:07:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:20:55.761 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000004 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000004 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:55.761 00:07:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:20:58.284 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:20:58.284 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:20:58.284 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:20:58.284 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:58.284 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000005 00:20:58.284 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:58.284 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000005 00:20:58.284 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:20:58.284 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 
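The xtrace records above (tagged target/srq_overwhelm.sh@40-43 and common/autotest_common.sh@1215-1227) trace a per-subsystem teardown loop: disconnect the initiator, wait for the device serial to vanish from lsblk, then delete the subsystem over JSON-RPC. A minimal sketch of that loop, reconstructed from the traced commands (the actual function bodies in the SPDK scripts may differ in detail):

    for i in $(seq 0 5); do
        # Drop the host-side connection to the subsystem...
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # ...poll until no block device with the matching serial remains
        # (waitforserial_disconnect wraps: lsblk -o NAME,SERIAL | grep -q -w <serial>)...
        waitforserial_disconnect "SPDK0000000000000${i}"
        # ...then remove the subsystem on the target via JSON-RPC.
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done

This is why each "nvme disconnect" record in the log is followed by paired lsblk/grep invocations before the corresponding nvmf_delete_subsystem call.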
00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:58.285 rmmod nvme_rdma 00:20:58.285 rmmod nvme_fabrics 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 576044 ']' 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 576044 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@946 -- # '[' -z 576044 ']' 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # kill -0 576044 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@951 -- # uname 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 576044 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # echo 'killing process with pid 576044' 00:20:58.285 killing process with pid 576044 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@965 -- # kill 576044 00:20:58.285 [2024-05-15 00:07:27.343643] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:58.285 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@970 -- # wait 576044 00:20:58.285 [2024-05-15 00:07:27.392026] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:20:58.543 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:58.543 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:58.543 00:20:58.543 real 0m52.471s 00:20:58.543 user 3m18.234s 00:20:58.543 sys 0m11.487s 00:20:58.543 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:20:58.543 00:07:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:58.543 ************************************ 00:20:58.543 END TEST nvmf_srq_overwhelm 00:20:58.543 ************************************ 00:20:58.543 00:07:27 nvmf_rdma -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:20:58.543 00:07:27 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:58.543 00:07:27 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:58.543 00:07:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:58.543 ************************************ 00:20:58.543 START TEST nvmf_shutdown 00:20:58.543 ************************************ 00:20:58.543 00:07:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:20:58.802 * Looking for test storage... 00:20:58.802 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.802 
00:07:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:58.802 00:07:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown -- 
target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:58.803 ************************************ 00:20:58.803 START TEST nvmf_shutdown_tc1 00:20:58.803 ************************************ 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:58.803 00:07:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.327 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.327 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.327 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.327 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.327 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.327 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.327 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.327 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.327 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:01.328 
00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:01.328 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:01.328 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:01.328 Found net devices under 0000:09:00.0: mlx_0_0 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:01.328 Found net devices under 0000:09:00.1: mlx_0_1 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:21:01.328 00:07:30 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:01.328 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:01.329 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:01.329 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:21:01.329 altname enp9s0f0np0 00:21:01.329 inet 192.168.100.8/24 scope global mlx_0_0 00:21:01.329 valid_lft forever preferred_lft forever 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:01.329 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:01.329 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:21:01.329 altname enp9s0f1np1 00:21:01.329 inet 192.168.100.9/24 scope global mlx_0_1 00:21:01.329 valid_lft forever preferred_lft forever 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
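The get_ip_address helper traced above (nvmf/common.sh@112-113) boils down to a three-stage pipeline. A minimal standalone sketch of the same logic, taken straight from the traced commands (the real helper may carry extra validation):

get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per IPv4 address; field 4 is "ADDR/PREFIX",
    # so strip the prefix length to keep the bare address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig
get_ip_address mlx_0_1   # prints 192.168.100.9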
00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:01.329 192.168.100.9' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:01.329 192.168.100.9' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:01.329 192.168.100.9' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:21:01.329 00:07:30 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=584219 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 584219 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 584219 ']' 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:01.329 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.329 [2024-05-15 00:07:30.541523] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:21:01.329 [2024-05-15 00:07:30.541613] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.329 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.329 [2024-05-15 00:07:30.611504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.587 [2024-05-15 00:07:30.723181] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.587 [2024-05-15 00:07:30.723249] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
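The head/tail dance traced just above is how nvmf/common.sh splits the discovered addresses: RDMA_IP_LIST carries one IP per line, the first line becomes NVMF_FIRST_TARGET_IP and the second becomes NVMF_SECOND_TARGET_IP. The same idiom in isolation:

# One discovered RDMA address per line, as assembled by get_available_rdma_ips.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9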
00:21:01.587 [2024-05-15 00:07:30.723262] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.587 [2024-05-15 00:07:30.723289] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.587 [2024-05-15 00:07:30.723305] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.587 [2024-05-15 00:07:30.723396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.587 [2024-05-15 00:07:30.723676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.587 [2024-05-15 00:07:30.723714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:01.587 [2024-05-15 00:07:30.723717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.587 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:01.587 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:21:01.587 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.587 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.587 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.588 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.588 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:01.588 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.588 00:07:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.588 [2024-05-15 00:07:30.891711] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x227dd10/0x2282200) succeed. 00:21:01.588 [2024-05-15 00:07:30.902788] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x227f350/0x22c3890) succeed. 
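Stripped of the xtrace noise, the target bring-up traced through here is just: launch nvmf_tgt, wait for its RPC socket, create the RDMA transport. A standalone sketch using scripts/rpc.py directly (rpc_cmd is a thin wrapper around it; paths are relative to the spdk checkout, and the polling loop is a crude stand-in for waitforlisten):

# Start the NVMe-oF target app: shm id 0, all trace groups, cores 1-4 (0x1E).
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Wait until the app answers RPCs on /var/tmp/spdk.sock.
until scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

# Create the RDMA transport with the same options as the trace.
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192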
00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.846 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.846 Malloc1 00:21:01.846 [2024-05-15 00:07:31.123733] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:21:01.846 [2024-05-15 00:07:31.124066] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:01.846 Malloc2 00:21:02.103 Malloc3 00:21:02.103 Malloc4 00:21:02.103 Malloc5 00:21:02.103 Malloc6 00:21:02.103 Malloc7 00:21:02.103 Malloc8 00:21:02.361 Malloc9 00:21:02.361 Malloc10 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=584400 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 584400 /var/tmp/bdevperf.sock 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 584400 ']' 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
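The create_subsystems block traced above appends ten batches to rpcs.txt (the repeated cat at shutdown.sh@28) and replays the file through a single rpc_cmd, which is what produces the Malloc1 through Malloc10 bdevs and the 192.168.100.8:4420 listeners. The heredoc body itself is not visible in the trace; given MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and the names that appear, each batch plausibly amounts to this hypothetical sketch:

# Hypothetical reconstruction of one appended batch (subsystem 1); the real
# shutdown.sh heredoc is not shown in the trace and may differ in detail.
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc1
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
EOF
# ...repeated for cnode2..cnode10, then replayed in one shot:
rpc_cmd < rpcs.txt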
00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.361 { 00:21:02.361 "params": { 00:21:02.361 "name": "Nvme$subsystem", 00:21:02.361 "trtype": "$TEST_TRANSPORT", 00:21:02.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.361 "adrfam": "ipv4", 00:21:02.361 "trsvcid": "$NVMF_PORT", 00:21:02.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.361 "hdgst": ${hdgst:-false}, 00:21:02.361 "ddgst": ${ddgst:-false} 00:21:02.361 }, 00:21:02.361 "method": "bdev_nvme_attach_controller" 00:21:02.361 } 00:21:02.361 EOF 00:21:02.361 )") 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.361 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.361 { 00:21:02.361 "params": { 00:21:02.361 "name": "Nvme$subsystem", 00:21:02.361 "trtype": "$TEST_TRANSPORT", 00:21:02.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "$NVMF_PORT", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.362 "hdgst": ${hdgst:-false}, 00:21:02.362 "ddgst": ${ddgst:-false} 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 } 00:21:02.362 EOF 00:21:02.362 )") 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.362 { 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme$subsystem", 00:21:02.362 "trtype": "$TEST_TRANSPORT", 00:21:02.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "$NVMF_PORT", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.362 "hdgst": ${hdgst:-false}, 00:21:02.362 "ddgst": ${ddgst:-false} 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 } 00:21:02.362 EOF 00:21:02.362 )") 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.362 { 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme$subsystem", 00:21:02.362 "trtype": "$TEST_TRANSPORT", 00:21:02.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "$NVMF_PORT", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.362 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:02.362 "hdgst": ${hdgst:-false}, 00:21:02.362 "ddgst": ${ddgst:-false} 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 } 00:21:02.362 EOF 00:21:02.362 )") 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.362 { 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme$subsystem", 00:21:02.362 "trtype": "$TEST_TRANSPORT", 00:21:02.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "$NVMF_PORT", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.362 "hdgst": ${hdgst:-false}, 00:21:02.362 "ddgst": ${ddgst:-false} 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 } 00:21:02.362 EOF 00:21:02.362 )") 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.362 { 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme$subsystem", 00:21:02.362 "trtype": "$TEST_TRANSPORT", 00:21:02.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "$NVMF_PORT", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.362 "hdgst": ${hdgst:-false}, 00:21:02.362 "ddgst": ${ddgst:-false} 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 } 00:21:02.362 EOF 00:21:02.362 )") 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.362 { 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme$subsystem", 00:21:02.362 "trtype": "$TEST_TRANSPORT", 00:21:02.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "$NVMF_PORT", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.362 "hdgst": ${hdgst:-false}, 00:21:02.362 "ddgst": ${ddgst:-false} 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 } 00:21:02.362 EOF 00:21:02.362 )") 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.362 { 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme$subsystem", 00:21:02.362 "trtype": "$TEST_TRANSPORT", 00:21:02.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "$NVMF_PORT", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.362 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:02.362 "hdgst": ${hdgst:-false}, 00:21:02.362 "ddgst": ${ddgst:-false} 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 } 00:21:02.362 EOF 00:21:02.362 )") 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.362 { 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme$subsystem", 00:21:02.362 "trtype": "$TEST_TRANSPORT", 00:21:02.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "$NVMF_PORT", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.362 "hdgst": ${hdgst:-false}, 00:21:02.362 "ddgst": ${ddgst:-false} 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 } 00:21:02.362 EOF 00:21:02.362 )") 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.362 { 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme$subsystem", 00:21:02.362 "trtype": "$TEST_TRANSPORT", 00:21:02.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "$NVMF_PORT", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.362 "hdgst": ${hdgst:-false}, 00:21:02.362 "ddgst": ${ddgst:-false} 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 } 00:21:02.362 EOF 00:21:02.362 )") 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:02.362 00:07:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme1", 00:21:02.362 "trtype": "rdma", 00:21:02.362 "traddr": "192.168.100.8", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "4420", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.362 "hdgst": false, 00:21:02.362 "ddgst": false 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 },{ 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme2", 00:21:02.362 "trtype": "rdma", 00:21:02.362 "traddr": "192.168.100.8", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "4420", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:02.362 "hdgst": false, 00:21:02.362 "ddgst": false 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 },{ 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme3", 00:21:02.362 "trtype": "rdma", 00:21:02.362 "traddr": "192.168.100.8", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "4420", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:02.362 "hdgst": false, 00:21:02.362 "ddgst": false 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 },{ 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme4", 00:21:02.362 "trtype": "rdma", 00:21:02.362 "traddr": "192.168.100.8", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "4420", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:02.362 "hdgst": false, 00:21:02.362 "ddgst": false 00:21:02.362 }, 00:21:02.362 "method": "bdev_nvme_attach_controller" 00:21:02.362 },{ 00:21:02.362 "params": { 00:21:02.362 "name": "Nvme5", 00:21:02.362 "trtype": "rdma", 00:21:02.362 "traddr": "192.168.100.8", 00:21:02.362 "adrfam": "ipv4", 00:21:02.362 "trsvcid": "4420", 00:21:02.362 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:02.362 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:02.363 "hdgst": false, 00:21:02.363 "ddgst": false 00:21:02.363 }, 00:21:02.363 "method": "bdev_nvme_attach_controller" 00:21:02.363 },{ 00:21:02.363 "params": { 00:21:02.363 "name": "Nvme6", 00:21:02.363 "trtype": "rdma", 00:21:02.363 "traddr": "192.168.100.8", 00:21:02.363 "adrfam": "ipv4", 00:21:02.363 "trsvcid": "4420", 00:21:02.363 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:02.363 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:02.363 "hdgst": false, 00:21:02.363 "ddgst": false 00:21:02.363 }, 00:21:02.363 "method": "bdev_nvme_attach_controller" 00:21:02.363 },{ 00:21:02.363 "params": { 00:21:02.363 "name": "Nvme7", 00:21:02.363 "trtype": "rdma", 00:21:02.363 "traddr": "192.168.100.8", 00:21:02.363 "adrfam": "ipv4", 00:21:02.363 "trsvcid": "4420", 00:21:02.363 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:02.363 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:02.363 "hdgst": false, 00:21:02.363 "ddgst": false 00:21:02.363 }, 00:21:02.363 "method": "bdev_nvme_attach_controller" 00:21:02.363 },{ 00:21:02.363 "params": { 00:21:02.363 "name": "Nvme8", 00:21:02.363 "trtype": "rdma", 00:21:02.363 "traddr": "192.168.100.8", 00:21:02.363 "adrfam": "ipv4", 00:21:02.363 "trsvcid": "4420", 00:21:02.363 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:02.363 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:02.363 "hdgst": false, 00:21:02.363 "ddgst": false 00:21:02.363 }, 00:21:02.363 "method": "bdev_nvme_attach_controller" 00:21:02.363 },{ 00:21:02.363 "params": { 00:21:02.363 "name": "Nvme9", 00:21:02.363 "trtype": "rdma", 00:21:02.363 "traddr": "192.168.100.8", 00:21:02.363 "adrfam": "ipv4", 00:21:02.363 "trsvcid": "4420", 00:21:02.363 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:02.363 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:02.363 "hdgst": false, 00:21:02.363 "ddgst": false 00:21:02.363 }, 00:21:02.363 "method": "bdev_nvme_attach_controller" 00:21:02.363 },{ 00:21:02.363 "params": { 00:21:02.363 "name": "Nvme10", 00:21:02.363 "trtype": "rdma", 00:21:02.363 "traddr": "192.168.100.8", 00:21:02.363 "adrfam": "ipv4", 00:21:02.363 "trsvcid": "4420", 00:21:02.363 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:02.363 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:02.363 "hdgst": false, 00:21:02.363 "ddgst": false 00:21:02.363 }, 00:21:02.363 "method": "bdev_nvme_attach_controller" 00:21:02.363 }' 00:21:02.363 [2024-05-15 00:07:31.628669] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:21:02.363 [2024-05-15 00:07:31.628742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:02.363 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.363 [2024-05-15 00:07:31.702681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.621 [2024-05-15 00:07:31.813476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.552 00:07:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:03.552 00:07:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:21:03.552 00:07:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:03.552 00:07:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.552 00:07:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.552 00:07:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.552 00:07:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 584400 00:21:03.552 00:07:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:03.552 00:07:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:04.484 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 584400 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 584219 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:04.484 
00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.484 { 00:21:04.484 "params": { 00:21:04.484 "name": "Nvme$subsystem", 00:21:04.484 "trtype": "$TEST_TRANSPORT", 00:21:04.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.484 "adrfam": "ipv4", 00:21:04.484 "trsvcid": "$NVMF_PORT", 00:21:04.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.484 "hdgst": ${hdgst:-false}, 00:21:04.484 "ddgst": ${ddgst:-false} 00:21:04.484 }, 00:21:04.484 "method": "bdev_nvme_attach_controller" 00:21:04.484 } 00:21:04.484 EOF 00:21:04.484 )") 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.484 { 00:21:04.484 "params": { 00:21:04.484 "name": "Nvme$subsystem", 00:21:04.484 "trtype": "$TEST_TRANSPORT", 00:21:04.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.484 "adrfam": "ipv4", 00:21:04.484 "trsvcid": "$NVMF_PORT", 00:21:04.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.484 "hdgst": ${hdgst:-false}, 00:21:04.484 "ddgst": ${ddgst:-false} 00:21:04.484 }, 00:21:04.484 "method": "bdev_nvme_attach_controller" 00:21:04.484 } 00:21:04.484 EOF 00:21:04.484 )") 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.484 { 00:21:04.484 "params": { 00:21:04.484 "name": "Nvme$subsystem", 00:21:04.484 "trtype": "$TEST_TRANSPORT", 00:21:04.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.484 "adrfam": "ipv4", 00:21:04.484 "trsvcid": "$NVMF_PORT", 00:21:04.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.484 "hdgst": ${hdgst:-false}, 00:21:04.484 "ddgst": ${ddgst:-false} 00:21:04.484 }, 00:21:04.484 "method": "bdev_nvme_attach_controller" 00:21:04.484 } 00:21:04.484 EOF 00:21:04.484 )") 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.484 { 00:21:04.484 "params": { 00:21:04.484 "name": "Nvme$subsystem", 00:21:04.484 "trtype": "$TEST_TRANSPORT", 00:21:04.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.484 "adrfam": "ipv4", 00:21:04.484 "trsvcid": "$NVMF_PORT", 00:21:04.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.484 "hdgst": ${hdgst:-false}, 00:21:04.484 "ddgst": ${ddgst:-false} 00:21:04.484 }, 00:21:04.484 "method": "bdev_nvme_attach_controller" 00:21:04.484 } 00:21:04.484 EOF 00:21:04.484 )") 00:21:04.484 
00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.484 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.484 { 00:21:04.484 "params": { 00:21:04.484 "name": "Nvme$subsystem", 00:21:04.484 "trtype": "$TEST_TRANSPORT", 00:21:04.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.484 "adrfam": "ipv4", 00:21:04.484 "trsvcid": "$NVMF_PORT", 00:21:04.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.484 "hdgst": ${hdgst:-false}, 00:21:04.484 "ddgst": ${ddgst:-false} 00:21:04.484 }, 00:21:04.484 "method": "bdev_nvme_attach_controller" 00:21:04.485 } 00:21:04.485 EOF 00:21:04.485 )") 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.485 { 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme$subsystem", 00:21:04.485 "trtype": "$TEST_TRANSPORT", 00:21:04.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "$NVMF_PORT", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.485 "hdgst": ${hdgst:-false}, 00:21:04.485 "ddgst": ${ddgst:-false} 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 } 00:21:04.485 EOF 00:21:04.485 )") 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.485 { 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme$subsystem", 00:21:04.485 "trtype": "$TEST_TRANSPORT", 00:21:04.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "$NVMF_PORT", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.485 "hdgst": ${hdgst:-false}, 00:21:04.485 "ddgst": ${ddgst:-false} 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 } 00:21:04.485 EOF 00:21:04.485 )") 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.485 { 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme$subsystem", 00:21:04.485 "trtype": "$TEST_TRANSPORT", 00:21:04.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "$NVMF_PORT", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.485 "hdgst": ${hdgst:-false}, 00:21:04.485 "ddgst": ${ddgst:-false} 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 } 00:21:04.485 EOF 00:21:04.485 )") 00:21:04.485 00:07:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.485 { 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme$subsystem", 00:21:04.485 "trtype": "$TEST_TRANSPORT", 00:21:04.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "$NVMF_PORT", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.485 "hdgst": ${hdgst:-false}, 00:21:04.485 "ddgst": ${ddgst:-false} 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 } 00:21:04.485 EOF 00:21:04.485 )") 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.485 { 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme$subsystem", 00:21:04.485 "trtype": "$TEST_TRANSPORT", 00:21:04.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "$NVMF_PORT", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.485 "hdgst": ${hdgst:-false}, 00:21:04.485 "ddgst": ${ddgst:-false} 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 } 00:21:04.485 EOF 00:21:04.485 )") 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
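Both config consumers in this test receive their JSON through bash process substitution rather than a temp file: the /dev/fd/63 and /dev/fd/62 paths in the traces are the read ends of <(gen_nvmf_target_json ...). The tc1 choreography is: attach a throwaway bdev_svc (pid 584400) to all ten subsystems, kill -9 it, confirm with kill -0 that the target (pid 584219) survived, then drive I/O with bdevperf, whose joined config is printed just below. The traced bdevperf invocation spelled out (run from the spdk checkout, with gen_nvmf_target_json sourced from nvmf/common.sh):

# The generated JSON never touches disk; bash hands bdevperf a /dev/fd/NN path.
build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1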
00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:04.485 00:07:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme1", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 },{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme2", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 },{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme3", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 },{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme4", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 },{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme5", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 },{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme6", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 },{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme7", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 },{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme8", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:04.485 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 },{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme9", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 },{ 00:21:04.485 "params": { 00:21:04.485 "name": "Nvme10", 00:21:04.485 "trtype": "rdma", 00:21:04.485 "traddr": "192.168.100.8", 00:21:04.485 "adrfam": "ipv4", 00:21:04.485 "trsvcid": "4420", 00:21:04.485 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:04.485 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:04.485 "hdgst": false, 00:21:04.485 "ddgst": false 00:21:04.485 }, 00:21:04.485 "method": "bdev_nvme_attach_controller" 00:21:04.485 }' 00:21:04.485 [2024-05-15 00:07:33.734386] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:21:04.485 [2024-05-15 00:07:33.734473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid584691 ] 00:21:04.485 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.485 [2024-05-15 00:07:33.810253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.743 [2024-05-15 00:07:33.925289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.676 Running I/O for 1 seconds... 00:21:07.047 00:21:07.047 Latency(us) 00:21:07.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.047 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme1n1 : 1.19 293.27 18.33 0.00 0.00 211741.86 41748.86 251658.24 00:21:07.047 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme2n1 : 1.19 294.55 18.41 0.00 0.00 207828.44 41943.04 243891.01 00:21:07.047 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme3n1 : 1.19 322.65 20.17 0.00 0.00 187016.89 5437.06 168548.88 00:21:07.047 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme4n1 : 1.19 321.38 20.09 0.00 0.00 184735.92 12815.93 161558.38 00:21:07.047 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme5n1 : 1.20 320.78 20.05 0.00 0.00 185357.78 22622.06 145247.19 00:21:07.047 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme6n1 : 1.20 320.34 20.02 0.00 0.00 179439.06 24272.59 136703.24 00:21:07.047 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme7n1 : 1.21 331.67 20.73 0.00 0.00 173394.42 5024.43 124275.67 00:21:07.047 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme8n1 : 1.21 344.51 21.53 0.00 0.00 164381.64 5121.52 120392.06 00:21:07.047 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme9n1 : 1.20 319.01 19.94 0.00 0.00 174932.57 24855.13 128936.01 00:21:07.047 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.047 Verification LBA range: start 0x0 length 0x400 00:21:07.047 Nvme10n1 : 1.21 316.95 19.81 0.00 0.00 173093.17 5606.97 191850.57 00:21:07.047 =================================================================================================================== 00:21:07.047 Total : 3185.11 199.07 0.00 0.00 183525.30 5024.43 251658.24 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.047 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:07.047 rmmod nvme_rdma 00:21:07.047 rmmod nvme_fabrics 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 584219 ']' 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 584219 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 584219 ']' 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 584219 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 584219 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # 
process_name=reactor_1 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 584219' 00:21:07.304 killing process with pid 584219 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 584219 00:21:07.304 [2024-05-15 00:07:36.433021] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:07.304 00:07:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 584219 00:21:07.304 [2024-05-15 00:07:36.516984] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:07.897 00:21:07.897 real 0m9.092s 00:21:07.897 user 0m28.979s 00:21:07.897 sys 0m2.867s 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:07.897 ************************************ 00:21:07.897 END TEST nvmf_shutdown_tc1 00:21:07.897 ************************************ 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:07.897 ************************************ 00:21:07.897 START TEST nvmf_shutdown_tc2 00:21:07.897 ************************************ 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 
00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.897 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
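The @289-@320 steps above declare the per-vendor arrays and collect the PCI vendor:device IDs of NVMe-oF-capable NICs; the steps immediately below merge them and, because this is an mlx5 run, keep only the Mellanox entries. A condensed sketch of the pattern, with IDs copied from the trace (pci_bus_cache is assumed to be an associative array of "vendor:device" to PCI addresses populated earlier in common.sh):

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    for id in 0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013; do
        mlx+=(${pci_bus_cache["$mellanox:$id"]})   # ConnectX family IDs, incl. the 0x1017 found below
    done
    pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
    [[ $SPDK_TEST_NVMF_NICS == mlx5 ]] && pci_devs=("${mlx[@]}")   # this run: Mellanox only

On this node the cache resolves two 0x15b3:0x1017 (ConnectX-5) functions, which is why the discovery below reports 0000:09:00.0 and 0000:09:00.1.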
00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:07.898 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:07.898 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:07.898 Found net devices under 0000:09:00.0: mlx_0_0 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.898 00:07:37 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:07.898 Found net devices under 0000:09:00.1: mlx_0_1 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:07.898 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:07.898 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:21:07.898 altname enp9s0f0np0 00:21:07.898 inet 192.168.100.8/24 scope global mlx_0_0 00:21:07.898 valid_lft forever preferred_lft forever 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:07.898 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:07.898 00:07:37 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:07.898 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:07.898 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:21:07.898 altname enp9s0f1np1 00:21:07.899 inet 192.168.100.9/24 scope global mlx_0_1 00:21:07.899 valid_lft forever preferred_lft forever 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:07.899 192.168.100.9' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:07.899 192.168.100.9' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:07.899 192.168.100.9' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:07.899 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:08.155 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=585196 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 585196 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 585196 ']' 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:08.156 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.156 [2024-05-15 00:07:37.293846] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:21:08.156 [2024-05-15 00:07:37.293940] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.156 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.156 [2024-05-15 00:07:37.376478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.156 [2024-05-15 00:07:37.498913] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.156 [2024-05-15 00:07:37.498981] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.156 [2024-05-15 00:07:37.498998] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.156 [2024-05-15 00:07:37.499011] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.156 [2024-05-15 00:07:37.499023] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:08.156 [2024-05-15 00:07:37.499114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.156 [2024-05-15 00:07:37.502963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.156 [2024-05-15 00:07:37.503079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:08.156 [2024-05-15 00:07:37.503084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.413 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:08.413 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:21:08.413 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.413 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.413 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.413 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.413 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:08.413 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.413 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.413 [2024-05-15 00:07:37.680717] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20c5d10/0x20ca200) succeed. 00:21:08.413 [2024-05-15 00:07:37.691567] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20c7350/0x210b890) succeed. 
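With both IB devices created, shutdown.sh@20 sets up the RDMA transport and the @22-@35 steps below queue one block of subsystem RPCs per target. rpc_cmd is a thin wrapper over scripts/rpc.py, so the traced transport call is equivalent to this direct invocation (path assumed relative to an SPDK checkout; flags exactly as traced):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

Each @28 cat step below appends one subsystem's worth of RPC lines to rpcs.txt, which the @35 rpc_cmd then replays in bulk. Reconstructed from the Malloc1..Malloc10 bdevs and cnode1..cnode10 subsystems this run ends up with (not read verbatim from this log; MALLOC_SIZE and MALLOC_BLOCK_SIZE are common.sh variable names assumed here), each block plausibly looks like:

    bdev_malloc_create -b Malloc$i $MALLOC_SIZE $MALLOC_BLOCK_SIZE
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420

The listener notice for 192.168.100.8 port 4420 below is consistent with that last RPC.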
00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat
[the @27 for / @28 cat pair repeats verbatim for each of the ten subsystems; duplicate iterations elided]
00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.670 00:07:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.670 Malloc1 [2024-05-15 00:07:37.907688] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be
removed in v24.09 00:21:08.670 [2024-05-15 00:07:37.908007] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:08.670 Malloc2 00:21:08.671 Malloc3 00:21:08.928 Malloc4 00:21:08.928 Malloc5 00:21:08.928 Malloc6 00:21:08.928 Malloc7 00:21:08.928 Malloc8 00:21:08.928 Malloc9 00:21:09.186 Malloc10 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=585376 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 585376 /var/tmp/bdevperf.sock 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 585376 ']' 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
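The /dev/fd/63 argument traced above is bash process substitution: gen_nvmf_target_json's output is handed to bdevperf as its --json config without touching disk. A sketch of the wiring at shutdown.sh@102-@104, reconstructed from the trace (run from the SPDK root; flags exactly as logged):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!                                  # 585376 in this run
    waitforlisten $perfpid /var/tmp/bdevperf.sock

Queue depth 64 and 64 KiB I/O against ten RDMA-attached namespaces at 192.168.100.8:4420, with a nominal 10-second verify run, is the workload the per-device table below comes from.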
00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.186 { 00:21:09.186 "params": { 00:21:09.186 "name": "Nvme$subsystem", 00:21:09.186 "trtype": "$TEST_TRANSPORT", 00:21:09.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.186 "adrfam": "ipv4", 00:21:09.186 "trsvcid": "$NVMF_PORT", 00:21:09.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.186 "hdgst": ${hdgst:-false}, 00:21:09.186 "ddgst": ${ddgst:-false} 00:21:09.186 }, 00:21:09.186 "method": "bdev_nvme_attach_controller" 00:21:09.186 } 00:21:09.186 EOF 00:21:09.186 )") 00:21:09.186 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat
[as in the tc1 run above, the @534 for / @554 config+= / @554 cat trace repeats verbatim for subsystems 2 through 10; duplicate iterations elided]
00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq .
00:21:09.187 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:09.187 00:07:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:09.187 "params": { 00:21:09.187 "name": "Nvme1", 00:21:09.187 "trtype": "rdma", 00:21:09.187 "traddr": "192.168.100.8", 00:21:09.187 "adrfam": "ipv4", 00:21:09.187 "trsvcid": "4420", 00:21:09.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.187 "hdgst": false, 00:21:09.187 "ddgst": false 00:21:09.187 }, 00:21:09.187 "method": "bdev_nvme_attach_controller" 00:21:09.187 },{ 00:21:09.187 "params": { 00:21:09.187 "name": "Nvme2", 00:21:09.187 "trtype": "rdma", 00:21:09.187 "traddr": "192.168.100.8", 00:21:09.187 "adrfam": "ipv4", 00:21:09.187 "trsvcid": "4420", 00:21:09.187 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:09.187 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:09.187 "hdgst": false, 00:21:09.187 "ddgst": false 00:21:09.187 }, 00:21:09.187 "method": "bdev_nvme_attach_controller" 00:21:09.187 },{ 00:21:09.187 "params": { 00:21:09.187 "name": "Nvme3", 00:21:09.187 "trtype": "rdma", 00:21:09.187 "traddr": "192.168.100.8", 00:21:09.187 "adrfam": "ipv4", 00:21:09.187 "trsvcid": "4420", 00:21:09.187 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:09.187 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:09.187 "hdgst": false, 00:21:09.187 "ddgst": false 00:21:09.187 }, 00:21:09.187 "method": "bdev_nvme_attach_controller" 00:21:09.187 },{ 00:21:09.187 "params": { 00:21:09.187 "name": "Nvme4", 00:21:09.187 "trtype": "rdma", 00:21:09.187 "traddr": "192.168.100.8", 00:21:09.187 "adrfam": "ipv4", 00:21:09.187 "trsvcid": "4420", 00:21:09.187 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:09.187 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:09.187 "hdgst": false, 00:21:09.187 "ddgst": false 00:21:09.187 }, 00:21:09.187 "method": "bdev_nvme_attach_controller" 00:21:09.187 },{ 00:21:09.187 "params": { 00:21:09.187 "name": "Nvme5", 00:21:09.187 "trtype": "rdma", 00:21:09.187 "traddr": "192.168.100.8", 00:21:09.187 "adrfam": "ipv4", 00:21:09.187 "trsvcid": "4420", 00:21:09.187 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:09.187 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:09.187 "hdgst": false, 00:21:09.187 "ddgst": false 00:21:09.187 }, 00:21:09.187 "method": "bdev_nvme_attach_controller" 00:21:09.187 },{ 00:21:09.187 "params": { 00:21:09.187 "name": "Nvme6", 00:21:09.187 "trtype": "rdma", 00:21:09.187 "traddr": "192.168.100.8", 00:21:09.187 "adrfam": "ipv4", 00:21:09.187 "trsvcid": "4420", 00:21:09.187 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:09.187 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:09.187 "hdgst": false, 00:21:09.187 "ddgst": false 00:21:09.187 }, 00:21:09.187 "method": "bdev_nvme_attach_controller" 00:21:09.187 },{ 00:21:09.187 "params": { 00:21:09.187 "name": "Nvme7", 00:21:09.188 "trtype": "rdma", 00:21:09.188 "traddr": "192.168.100.8", 00:21:09.188 "adrfam": "ipv4", 00:21:09.188 "trsvcid": "4420", 00:21:09.188 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:09.188 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:09.188 "hdgst": false, 00:21:09.188 "ddgst": false 00:21:09.188 }, 00:21:09.188 "method": "bdev_nvme_attach_controller" 00:21:09.188 },{ 00:21:09.188 "params": { 00:21:09.188 "name": "Nvme8", 00:21:09.188 "trtype": "rdma", 00:21:09.188 "traddr": "192.168.100.8", 00:21:09.188 "adrfam": "ipv4", 00:21:09.188 "trsvcid": "4420", 00:21:09.188 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:09.188 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:09.188 "hdgst": false, 00:21:09.188 "ddgst": false 00:21:09.188 }, 00:21:09.188 "method": "bdev_nvme_attach_controller" 00:21:09.188 },{ 00:21:09.188 "params": { 00:21:09.188 "name": "Nvme9", 00:21:09.188 "trtype": "rdma", 00:21:09.188 "traddr": "192.168.100.8", 00:21:09.188 "adrfam": "ipv4", 00:21:09.188 "trsvcid": "4420", 00:21:09.188 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:09.188 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:09.188 "hdgst": false, 00:21:09.188 "ddgst": false 00:21:09.188 }, 00:21:09.188 "method": "bdev_nvme_attach_controller" 00:21:09.188 },{ 00:21:09.188 "params": { 00:21:09.188 "name": "Nvme10", 00:21:09.188 "trtype": "rdma", 00:21:09.188 "traddr": "192.168.100.8", 00:21:09.188 "adrfam": "ipv4", 00:21:09.188 "trsvcid": "4420", 00:21:09.188 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:09.188 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:09.188 "hdgst": false, 00:21:09.188 "ddgst": false 00:21:09.188 }, 00:21:09.188 "method": "bdev_nvme_attach_controller" 00:21:09.188 }' 00:21:09.188 [2024-05-15 00:07:38.401129] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:21:09.188 [2024-05-15 00:07:38.401208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid585376 ] 00:21:09.188 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.188 [2024-05-15 00:07:38.478095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.445 [2024-05-15 00:07:38.588457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.377 Running I/O for 10 seconds... 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 
00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.377 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.635 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.635 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:10.635 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:10.635 00:07:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=123 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 123 -ge 100 ']' 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 585376 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 585376 ']' 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 585376 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 585376 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 585376' 00:21:10.893 killing process with pid 585376 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 585376 00:21:10.893 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 585376 00:21:11.150 Received shutdown signal, test time was about 0.805135 seconds 00:21:11.150 00:21:11.150 Latency(us) 00:21:11.150 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:11.150 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme1n1 : 0.78 316.09 19.76 0.00 0.00 197678.54 8641.04 259425.47
00:21:11.150 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme2n1 : 0.79 325.53 20.35 0.00 0.00 187849.01 11747.93 191073.85
00:21:11.150 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme3n1 : 0.79 327.37 20.46 0.00 0.00 182206.92 3956.43 183306.62
00:21:11.150 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme4n1 : 0.79 329.21 20.58 0.00 0.00 176801.42 5072.97 174762.67
00:21:11.150 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme5n1 : 0.79 323.34 20.21 0.00 0.00 176408.65 13204.29 163111.82
00:21:11.150 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme6n1 : 0.79 322.54 20.16 0.00 0.00 172480.66 14078.10 151460.98
00:21:11.150 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme7n1 : 0.80 321.88 20.12 0.00 0.00 167530.38 14466.47 142917.03
00:21:11.150 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme8n1 : 0.80 321.22 20.08 0.00 0.00 163553.47 14854.83 134373.07
00:21:11.150 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme9n1 : 0.80 320.28 20.02 0.00 0.00 161381.26 15922.82 117285.17
00:21:11.150 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:11.150 Verification LBA range: start 0x0 length 0x400
00:21:11.150 Nvme10n1 : 0.80 238.72 14.92 0.00 0.00 209859.19 3932.16 279620.27
00:21:11.150 ===================================================================================================================
00:21:11.150 Total : 3146.20 196.64 0.00 0.00 178737.40 3932.16 279620.27
00:21:11.407 00:07:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 585196
00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
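A quick consistency check on the table above: bdevperf ran 64 KiB I/Os (-o 65536), so MiB/s should equal IOPS/16. For Nvme1n1, 316.09 IOPS x 65536 B = 316.09/16 MiB/s, about 19.76 MiB/s, which matches the row; the Total row likewise gives 3146.20/16, about 196.64 MiB/s. The Average/min/max columns are latency in microseconds, per the Latency(us) header.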
00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.339 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:12.339 rmmod nvme_rdma 00:21:12.596 rmmod nvme_fabrics 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 585196 ']' 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 585196 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 585196 ']' 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 585196 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 585196 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 585196' 00:21:12.597 killing process with pid 585196 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 585196 00:21:12.597 [2024-05-15 00:07:41.734262] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:12.597 00:07:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 585196 00:21:12.597 [2024-05-15 00:07:41.819314] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:13.165 00:21:13.165 real 0m5.228s 00:21:13.165 user 0m21.103s 00:21:13.165 sys 0m1.080s 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.165 ************************************ 00:21:13.165 END TEST nvmf_shutdown_tc2 00:21:13.165 ************************************ 
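Both teardowns in tc2 (bdevperf pid 585376, then the nvmf target pid 585196) go through the same killprocess helper; the xtrace shows its line numbers in autotest_common.sh. A simplified sketch of that logic, with the body of the sudo branch assumed (only its comparison is visible in the trace):

    killprocess() {
        local pid=$1 process_name
        [[ -z $pid ]] && return 1                # @946: a pid is required
        kill -0 "$pid" || return 1               # @950: process must be alive
        if [[ $(uname) == Linux ]]; then         # @951
            process_name=$(ps --no-headers -o comm= "$pid")   # @952
        fi
        if [[ $process_name == sudo ]]; then     # @956: never signal sudo itself
            pid=$(ps --ppid "$pid" -o pid= | xargs)  # assumed: re-target the child
        fi
        echo "killing process with pid $pid"     # @964
        kill "$pid"                              # @965
        wait "$pid" || true                      # @970: reap; tolerate signal exit
    }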
00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:13.165 ************************************ 00:21:13.165 START TEST nvmf_shutdown_tc3 00:21:13.165 ************************************ 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@297 -- # x722=() 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:13.165 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.165 00:07:42 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:13.165 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.165 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:13.166 Found net devices under 0000:09:00.0: mlx_0_0 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:13.166 Found net devices under 0000:09:00.1: mlx_0_1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 
-- # uname 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:13.166 00:07:42 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:13.166 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:13.166 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:21:13.166 altname enp9s0f0np0 00:21:13.166 inet 192.168.100.8/24 scope global mlx_0_0 00:21:13.166 valid_lft forever preferred_lft forever 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:13.166 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:13.166 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:21:13.166 altname enp9s0f1np1 00:21:13.166 inet 192.168.100.9/24 scope global mlx_0_1 00:21:13.166 valid_lft forever preferred_lft forever 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:13.166 00:07:42 
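The address discovery above is a three-stage pipeline per interface: `ip -o -4 addr show` prints one line per IPv4 address, awk takes the fourth field (the CIDR), and cut strips the prefix length. As a standalone sketch of nvmf/common.sh's get_ip_address:

    get_ip_address() {
        local interface=$1
        # $4 of `ip -o -4 addr show mlx_0_0` is e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8, per the dump above
    get_ip_address mlx_0_1   # -> 192.168.100.9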
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:13.166 192.168.100.9' 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:13.166 192.168.100.9' 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:13.166 192.168.100.9' 00:21:13.166 00:07:42 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:21:13.166 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=586019 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 586019 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 586019 ']' 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:13.425 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.425 [2024-05-15 00:07:42.573882] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:21:13.425 [2024-05-15 00:07:42.574001] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.425 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.425 [2024-05-15 00:07:42.643681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.425 [2024-05-15 00:07:42.755595] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.425 [2024-05-15 00:07:42.755651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
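RDMA_IP_LIST holds one discovered address per line, so the first and second target IPs fall out of a head/tail pair, exactly as in the @457/@458 commands above:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9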
00:21:13.425 [2024-05-15 00:07:42.755672] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.425 [2024-05-15 00:07:42.755689] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.425 [2024-05-15 00:07:42.755705] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.425 [2024-05-15 00:07:42.755796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.425 [2024-05-15 00:07:42.755860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.425 [2024-05-15 00:07:42.755940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:13.425 [2024-05-15 00:07:42.755957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.683 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:13.683 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:21:13.683 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.683 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.683 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.683 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.683 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:13.683 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.683 00:07:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.683 [2024-05-15 00:07:42.936902] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x96bd10/0x970200) succeed. 00:21:13.683 [2024-05-15 00:07:42.947942] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x96d350/0x9b1890) succeed. 
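nvmfappstart above boils down to launching nvmf_tgt in the background, recording its pid (586019 here), and blocking in waitforlisten until the RPC socket answers. A sketch of that startup sequence; the polling body of waitforlisten is an assumption, since only the call and the "Waiting for process..." message appear in the trace:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    # Hypothetical poll: rpc_get_methods succeeds once the app is listening.
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during init
        sleep 0.5
    done

The -m 0x1E core mask selects cores 1-4 (0b11110), which is why four reactors start and "Total cores available: 4" is reported above.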
00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 Malloc1 00:21:13.941 [2024-05-15 00:07:43.171007] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
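The transport-creation step above (`rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192`) is what triggers the two "Create IB device" notices: the target opens each mlx5 device when the RDMA transport is instantiated. rpc_cmd is the framework's wrapper around scripts/rpc.py, so the equivalent direct call is:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

where -u sets the I/O unit size in bytes and --num-shared-buffers sizes the transport's shared buffer pool.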
removed in v24.09 00:21:13.941 [2024-05-15 00:07:43.171338] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:13.941 Malloc2 00:21:13.941 Malloc3 00:21:14.198 Malloc4 00:21:14.198 Malloc5 00:21:14.198 Malloc6 00:21:14.198 Malloc7 00:21:14.198 Malloc8 00:21:14.198 Malloc9 00:21:14.456 Malloc10 00:21:14.456 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=586199 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 586199 /var/tmp/bdevperf.sock 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 586199 ']' 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.457 "trsvcid": "$NVMF_PORT", 00:21:14.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.457 "hdgst": ${hdgst:-false}, 00:21:14.457 "ddgst": ${ddgst:-false} 00:21:14.457 }, 00:21:14.457 "method": "bdev_nvme_attach_controller" 00:21:14.457 } 00:21:14.457 EOF 00:21:14.457 )") 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.457 "trsvcid": "$NVMF_PORT", 00:21:14.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.457 "hdgst": ${hdgst:-false}, 00:21:14.457 "ddgst": ${ddgst:-false} 00:21:14.457 }, 00:21:14.457 "method": "bdev_nvme_attach_controller" 00:21:14.457 } 00:21:14.457 EOF 00:21:14.457 )") 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.457 "trsvcid": "$NVMF_PORT", 00:21:14.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.457 "hdgst": ${hdgst:-false}, 00:21:14.457 "ddgst": ${ddgst:-false} 00:21:14.457 }, 00:21:14.457 "method": "bdev_nvme_attach_controller" 00:21:14.457 } 00:21:14.457 EOF 00:21:14.457 )") 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.457 "trsvcid": "$NVMF_PORT", 00:21:14.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.457 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:14.457 "hdgst": ${hdgst:-false}, 00:21:14.457 "ddgst": ${ddgst:-false} 00:21:14.457 }, 00:21:14.457 "method": "bdev_nvme_attach_controller" 00:21:14.457 } 00:21:14.457 EOF 00:21:14.457 )") 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.457 "trsvcid": "$NVMF_PORT", 00:21:14.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.457 "hdgst": ${hdgst:-false}, 00:21:14.457 "ddgst": ${ddgst:-false} 00:21:14.457 }, 00:21:14.457 "method": "bdev_nvme_attach_controller" 00:21:14.457 } 00:21:14.457 EOF 00:21:14.457 )") 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.457 "trsvcid": "$NVMF_PORT", 00:21:14.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.457 "hdgst": ${hdgst:-false}, 00:21:14.457 "ddgst": ${ddgst:-false} 00:21:14.457 }, 00:21:14.457 "method": "bdev_nvme_attach_controller" 00:21:14.457 } 00:21:14.457 EOF 00:21:14.457 )") 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.457 "trsvcid": "$NVMF_PORT", 00:21:14.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.457 "hdgst": ${hdgst:-false}, 00:21:14.457 "ddgst": ${ddgst:-false} 00:21:14.457 }, 00:21:14.457 "method": "bdev_nvme_attach_controller" 00:21:14.457 } 00:21:14.457 EOF 00:21:14.457 )") 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.457 "trsvcid": "$NVMF_PORT", 00:21:14.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.457 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:14.457 "hdgst": ${hdgst:-false}, 00:21:14.457 "ddgst": ${ddgst:-false} 00:21:14.457 }, 00:21:14.457 "method": "bdev_nvme_attach_controller" 00:21:14.457 } 00:21:14.457 EOF 00:21:14.457 )") 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.457 "trsvcid": "$NVMF_PORT", 00:21:14.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.457 "hdgst": ${hdgst:-false}, 00:21:14.457 "ddgst": ${ddgst:-false} 00:21:14.457 }, 00:21:14.457 "method": "bdev_nvme_attach_controller" 00:21:14.457 } 00:21:14.457 EOF 00:21:14.457 )") 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.457 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.457 { 00:21:14.457 "params": { 00:21:14.457 "name": "Nvme$subsystem", 00:21:14.457 "trtype": "$TEST_TRANSPORT", 00:21:14.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.457 "adrfam": "ipv4", 00:21:14.458 "trsvcid": "$NVMF_PORT", 00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.458 "hdgst": ${hdgst:-false}, 00:21:14.458 "ddgst": ${ddgst:-false} 00:21:14.458 }, 00:21:14.458 "method": "bdev_nvme_attach_controller" 00:21:14.458 } 00:21:14.458 EOF 00:21:14.458 )") 00:21:14.458 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:14.458 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:21:14.458 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=,
00:21:14.458 00:07:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme1",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 },{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme2",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 },{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme3",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 },{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme4",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 },{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme5",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 },{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme6",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 },{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme7",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 },{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme8",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 },{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme9",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 },{
00:21:14.458 "params": {
00:21:14.458 "name": "Nvme10",
00:21:14.458 "trtype": "rdma",
00:21:14.458 "traddr": "192.168.100.8",
00:21:14.458 "adrfam": "ipv4",
00:21:14.458 "trsvcid": "4420",
00:21:14.458 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:21:14.458 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:21:14.458 "hdgst": false,
00:21:14.458 "ddgst": false
00:21:14.458 },
00:21:14.458 "method": "bdev_nvme_attach_controller"
00:21:14.458 }'
00:21:14.458 [2024-05-15 00:07:43.666021] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:21:14.458 [2024-05-15 00:07:43.666098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid586199 ]
00:21:14.458 EAL: No free 2048 kB hugepages reported on node 1
00:21:14.458 [2024-05-15 00:07:43.740763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:14.715 [2024-05-15 00:07:43.851521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:15.648 Running I/O for 10 seconds...
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:15.648 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:15.906 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3
00:21:15.906 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:21:15.906 00:07:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:21:15.907 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:21:15.907 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:21:15.907 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:15.907 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:21:15.907 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:15.907 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:16.164 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:16.164 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=83
00:21:16.164 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 83 -ge 100 ']'
00:21:16.164 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:21:16.423 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:21:16.423 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:21:16.423 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:16.423 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:21:16.423 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:16.423 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=211
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']'
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 586019
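[Annotation] The polling loop traced above (target/shutdown.sh@57-69) reads iostat back from bdevperf until Nvme1n1 has served at least 100 reads, so I/O is known to be in flight before the target is killed. A condensed sketch of that helper; rpc_cmd stands in for the harness helper of the same name seen in the trace:

# Poll bdev_get_iostat until the bdev reports >= 100 completed reads,
# trying up to 10 times with a 0.25 s pause between attempts.
waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}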
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 586019 ']'
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 586019
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 586019
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:21:16.681 00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 586019'
00:21:16.681 killing process with pid 586019
00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 586019
00:21:16.681 [2024-05-15 00:07:45.806652] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:07:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 586019
00:21:16.681 [2024-05-15 00:07:45.916758] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:21:17.246 00:07:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:21:17.246 00:07:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:21:17.821 [2024-05-15 00:07:46.908113] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller.
00:21:17.821 [2024-05-15 00:07:46.909686] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller.
00:21:17.821 [2024-05-15 00:07:46.911289] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller.
00:21:17.821 [2024-05-15 00:07:46.912928] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller.
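[Annotation] killprocess (autotest_common.sh@946-970, traced above) is the harness teardown helper: validate the pid, refuse to kill a bare sudo wrapper, then kill and reap so the SPDK target shuts down cleanly. A simplified sketch of the checks visible in the trace; the real helper has additional branches (for example, processes running as sudo) not shown here:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                    # @946: refuse an empty pid
    kill -0 "$pid" || return 1                   # @950: process must still exist
    if [ "$(uname)" = Linux ]; then
        # @951-956: never terminate a bare sudo wrapper by accident
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"         # @964
    kill "$pid"                                  # @965: SIGTERM, allowing a clean shutdown
    wait "$pid" || true                          # @970: reap the child, ignore its exit status
}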
00:21:17.821 [2024-05-15 00:07:46.912990 - 00:07:46.915187] nvme_qpair.c: aborted I/O on qid:1, 64 entries; each entry below was logged as an nvme_io_qpair_print_command *NOTICE* (common fields: sqid:1 nsid:1 len:128, SGL KEYED DATA BLOCK, region len:0x10000) paired with an spdk_nvme_print_completion *NOTICE* carrying the identical status for every entry: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:5b30 p:0 m:0 dnr:0
  WRITE cid:55 lba:43008 addr:0x200019d8fd00 key:0x28945
  WRITE cid:6 lba:43136 addr:0x200019d7fc80 key:0x28945
  WRITE cid:57 lba:43264 addr:0x200019d6fc00 key:0x28945
  WRITE cid:8 lba:43392 addr:0x200019d5fb80 key:0x28945
  WRITE cid:59 lba:43520 addr:0x200019d4fb00 key:0x28945
  WRITE cid:10 lba:43648 addr:0x200019d3fa80 key:0x28945
  WRITE cid:61 lba:43776 addr:0x200019d2fa00 key:0x28945
  WRITE cid:12 lba:43904 addr:0x200019d1f980 key:0x28945
  WRITE cid:63 lba:44032 addr:0x200019d0f900 key:0x28945
  WRITE cid:15 lba:44160 addr:0x200019cff880 key:0x28945
  WRITE cid:16 lba:44288 addr:0x200019cef800 key:0x28945
  WRITE cid:17 lba:44416 addr:0x200019cdf780 key:0x28945
  WRITE cid:18 lba:44544 addr:0x200019ccf700 key:0x28945
  WRITE cid:19 lba:44672 addr:0x200019cbf680 key:0x28945
  WRITE cid:20 lba:44800 addr:0x200019caf600 key:0x28945
  WRITE cid:21 lba:44928 addr:0x200019c9f580 key:0x28945
  WRITE cid:22 lba:45056 addr:0x200019c8f500 key:0x28945
  WRITE cid:23 lba:45184 addr:0x200019c7f480 key:0x28945
  WRITE cid:24 lba:45312 addr:0x200019c6f400 key:0x28945
  WRITE cid:25 lba:45440 addr:0x200019c5f380 key:0x28945
  WRITE cid:26 lba:45568 addr:0x200019c4f300 key:0x28945
  WRITE cid:27 lba:45696 addr:0x200019c3f280 key:0x28945
  WRITE cid:28 lba:45824 addr:0x200019c2f200 key:0x28945
  WRITE cid:29 lba:45952 addr:0x200019c1f180 key:0x28945
  WRITE cid:30 lba:46080 addr:0x200019c0f100 key:0x28945
  WRITE cid:14 lba:46208 addr:0x200019ff0000 key:0x187300
  WRITE cid:32 lba:46336 addr:0x200019fdff80 key:0x187300
  WRITE cid:33 lba:46464 addr:0x200019fcff00 key:0x187300
  WRITE cid:34 lba:46592 addr:0x200019fbfe80 key:0x187300
  WRITE cid:35 lba:46720 addr:0x200019fafe00 key:0x187300
  WRITE cid:36 lba:46848 addr:0x200019f9fd80 key:0x187300
  WRITE cid:37 lba:46976 addr:0x200019f8fd00 key:0x187300
  WRITE cid:38 lba:47104 addr:0x200019f7fc80 key:0x187300
  WRITE cid:39 lba:47232 addr:0x200019f6fc00 key:0x187300
  WRITE cid:40 lba:47360 addr:0x200019f5fb80 key:0x187300
  WRITE cid:41 lba:47488 addr:0x200019f4fb00 key:0x187300
  WRITE cid:42 lba:47616 addr:0x200019f3fa80 key:0x187300
  WRITE cid:43 lba:47744 addr:0x200019f2fa00 key:0x187300
  WRITE cid:44 lba:47872 addr:0x200019f1f980 key:0x187300
  WRITE cid:45 lba:48000 addr:0x200019f0f900 key:0x187300
  WRITE cid:46 lba:48128 addr:0x200019eff880 key:0x187300
  WRITE cid:47 lba:48256 addr:0x200019eef800 key:0x187300
  WRITE cid:48 lba:48384 addr:0x200019edf780 key:0x187300
  WRITE cid:49 lba:48512 addr:0x200019ecf700 key:0x187300
  WRITE cid:50 lba:48640 addr:0x200019ebf680 key:0x187300
  WRITE cid:51 lba:48768 addr:0x200019eaf600 key:0x187300
  WRITE cid:52 lba:48896 addr:0x200019e9f580 key:0x187300
  WRITE cid:53 lba:49024 addr:0x200019aafc00 key:0x188f00
  READ cid:54 lba:40960 addr:0x2000110d0000 key:0x187800
  READ cid:7 lba:41088 addr:0x2000110f1000 key:0x187800
  READ cid:56 lba:41216 addr:0x200011112000 key:0x187800
  READ cid:9 lba:41344 addr:0x200011133000 key:0x187800
  READ cid:58 lba:41472 addr:0x200011154000 key:0x187800
  READ cid:11 lba:41600 addr:0x200011175000 key:0x187800
  READ cid:60 lba:41728 addr:0x200011196000 key:0x187800
  READ cid:13 lba:41856 addr:0x2000111b7000 key:0x187800
  READ cid:62 lba:41984 addr:0x2000111d8000 key:0x187800
  READ cid:31 lba:42112 addr:0x2000111f9000 key:0x187800
  READ cid:64 lba:42240 addr:0x20001121a000 key:0x187800
  READ cid:0 lba:42368 addr:0x20001123b000 key:0x187800
  READ cid:1 lba:42496 addr:0x20001125c000 key:0x187800
  READ cid:2 lba:42624 addr:0x20001127d000 key:0x187800
  READ cid:3 lba:42752 addr:0x20001129e000 key:0x187800
  READ cid:4 lba:42880 addr:0x2000112bf000 key:0x187800
00:21:17.823 [2024-05-15 00:07:46.916612] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller.
00:21:17.823 [2024-05-15 00:07:46.916658 - 00:07:46.918817] nvme_qpair.c: aborted I/O on qid:1, 64 entries; same per-entry format as above, with the identical completion status for every entry: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:83f0 p:0 m:0 dnr:0
  WRITE cid:55 lba:42496 addr:0x20001a01f780 key:0x187a00
  WRITE cid:2 lba:42624 addr:0x20001a00f700 key:0x187a00
  WRITE cid:57 lba:42752 addr:0x200019e8f500 key:0x187300
  WRITE cid:4 lba:42880 addr:0x200019e7f480 key:0x187300
  WRITE cid:59 lba:43008 addr:0x200019e6f400 key:0x187300
  WRITE cid:6 lba:43136 addr:0x200019e5f380 key:0x187300
  WRITE cid:61 lba:43264 addr:0x200019e4f300 key:0x187300
  WRITE cid:8 lba:43392 addr:0x200019e3f280 key:0x187300
  WRITE cid:63 lba:43520 addr:0x200019e2f200 key:0x187300
  WRITE cid:11 lba:43648 addr:0x200019e1f180 key:0x187300
  WRITE cid:12 lba:43776 addr:0x200019e0f100 key:0x187300
  WRITE cid:13 lba:43904 addr:0x20001a3f0000 key:0x187b00
  WRITE cid:14 lba:44032 addr:0x20001a3dff80 key:0x187b00
  WRITE cid:15 lba:44160 addr:0x20001a3cff00 key:0x187b00
  WRITE cid:16 lba:44288 addr:0x20001a3bfe80 key:0x187b00
  WRITE cid:17 lba:44416 addr:0x20001a3afe00 key:0x187b00
  WRITE cid:18 lba:44544 addr:0x20001a39fd80 key:0x187b00
  WRITE cid:19 lba:44672 addr:0x20001a38fd00 key:0x187b00
  WRITE cid:20 lba:44800 addr:0x20001a37fc80 key:0x187b00
  WRITE cid:21 lba:44928 addr:0x20001a36fc00 key:0x187b00
  WRITE cid:22 lba:45056 addr:0x20001a35fb80 key:0x187b00
  WRITE cid:23 lba:45184 addr:0x20001a34fb00 key:0x187b00
  WRITE cid:24 lba:45312 addr:0x20001a33fa80 key:0x187b00
  WRITE cid:25 lba:45440 addr:0x20001a32fa00 key:0x187b00
  WRITE cid:26 lba:45568 addr:0x20001a31f980 key:0x187b00
  WRITE cid:27 lba:45696 addr:0x20001a30f900 key:0x187b00
  WRITE cid:28 lba:45824 addr:0x20001a2ff880 key:0x187b00
  WRITE cid:29 lba:45952 addr:0x20001a2ef800 key:0x187b00
  WRITE cid:30 lba:46080 addr:0x20001a2df780 key:0x187b00
  WRITE cid:31 lba:46208 addr:0x20001a2cf700 key:0x187b00
  WRITE cid:32 lba:46336 addr:0x20001a2bf680 key:0x187b00
  WRITE cid:33 lba:46464 addr:0x20001a2af600 key:0x187b00
  WRITE cid:34 lba:46592 addr:0x20001a29f580 key:0x187b00
  WRITE cid:10 lba:46720 addr:0x20001a28f500 key:0x187b00
  WRITE cid:36 lba:46848 addr:0x20001a27f480 key:0x187b00
  WRITE cid:37 lba:46976 addr:0x20001a26f400 key:0x187b00
  WRITE cid:38 lba:47104 addr:0x20001a25f380 key:0x187b00
  WRITE cid:39 lba:47232 addr:0x20001a24f300 key:0x187b00
  WRITE cid:40 lba:47360 addr:0x20001a23f280 key:0x187b00
  WRITE cid:41 lba:47488 addr:0x20001a22f200 key:0x187b00
  WRITE cid:42 lba:47616 addr:0x20001a21f180 key:0x187b00
  WRITE cid:43 lba:47744 addr:0x20001a20f100 key:0x187b00
  WRITE cid:44 lba:47872 addr:0x20001a5f0000 key:0x187f00
  WRITE cid:45 lba:48000 addr:0x20001a5dff80 key:0x187f00
  WRITE cid:46 lba:48128 addr:0x20001a5cff00 key:0x187f00
  WRITE cid:47 lba:48256 addr:0x20001a5bfe80 key:0x187f00
  WRITE cid:48 lba:48384 addr:0x20001a5afe00 key:0x187f00
  WRITE cid:49 lba:48512 addr:0x20001a59fd80 key:0x187f00
  WRITE cid:50 lba:48640 addr:0x20001a58fd00 key:0x187f00
  WRITE cid:51 lba:48768 addr:0x20001a57fc80 key:0x187f00
  WRITE cid:52 lba:48896 addr:0x20001a56fc00 key:0x187f00
  WRITE cid:53 lba:49024 addr:0x20001a0efe00 key:0x187a00
  READ cid:54 lba:40960 addr:0x20000cfb7000 key:0x187800
  READ cid:3 lba:41088 addr:0x20000cf96000 key:0x187800
  READ cid:56 lba:41216 addr:0x20000cf75000 key:0x187800
  READ cid:5 lba:41344 addr:0x20000cf54000 key:0x187800
  READ cid:58 lba:41472 addr:0x200012f1b000 key:0x187800
  READ cid:7 lba:41600 addr:0x200012efa000 key:0x187800
  READ cid:60 lba:41728 addr:0x200012ed9000 key:0x187800
  READ cid:9 lba:41856 addr:0x200012eb8000 key:0x187800
  READ cid:62 lba:41984 addr:0x200012e97000 key:0x187800
  READ cid:35 lba:42112 addr:0x200012e76000 key:0x187800
  READ cid:64 lba:42240 addr:0x200012e55000 key:0x187800
  READ cid:0 lba:42368 addr:0x200012e34000 key:0x187800
00:21:17.825 [2024-05-15 00:07:46.920349] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller.
00:21:17.825 [2024-05-15, from 00:07:46.920386] nvme_qpair.c: aborted I/O on qid:1 for the next qpair; same per-entry format, with the identical completion status for every entry: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0
  WRITE cid:55 lba:44032 addr:0x20001a6bf680 key:0x187d00
  WRITE cid:43 lba:44160 addr:0x20001a6af600 key:0x187d00
  WRITE cid:57 lba:44288 addr:0x20001a69f580 key:0x187d00
  WRITE cid:16 lba:44416 addr:0x20001a68f500 key:0x187d00
  WRITE cid:59 lba:44544 addr:0x20001a67f480 key:0x187d00
  WRITE cid:18 lba:44672 addr:0x20001a66f400 key:0x187d00
  WRITE cid:61 lba:44800 addr:0x20001a65f380 key:0x187d00
[2024-05-15 00:07:46.920639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x187d00
[2024-05-15 00:07:46.920655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x187d00 00:21:17.825 [2024-05-15 00:07:46.920687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x187d00 00:21:17.825 [2024-05-15 00:07:46.920721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x187d00 00:21:17.825 [2024-05-15 00:07:46.920755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x187d00 00:21:17.825 [2024-05-15 00:07:46.920788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.920821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.920858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.920891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.920924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.920968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 
sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.920992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 
[2024-05-15 00:07:46.921297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.825 [2024-05-15 00:07:46.921395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x188100 00:21:17.825 [2024-05-15 00:07:46.921410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x188100 00:21:17.826 [2024-05-15 00:07:46.921442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x188100 00:21:17.826 [2024-05-15 00:07:46.921474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x188100 00:21:17.826 [2024-05-15 00:07:46.921508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x188100 00:21:17.826 [2024-05-15 00:07:46.921542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x188100 00:21:17.826 [2024-05-15 00:07:46.921575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x188100 00:21:17.826 [2024-05-15 00:07:46.921611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x188100 00:21:17.826 [2024-05-15 00:07:46.921645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x188100 00:21:17.826 [2024-05-15 00:07:46.921678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x188100 00:21:17.826 [2024-05-15 00:07:46.921711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0x187f00 00:21:17.826 [2024-05-15 00:07:46.921743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010aa0000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.921777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ac1000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.921809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ae2000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.921842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b03000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.921874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b24000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.921908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b45000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.921949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.921977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b66000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.921993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b87000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133bf000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001339e000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001337d000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001335c000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001333b000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132f9000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132d8000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc1b000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfa000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbd9000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbb8000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb97000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb76000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb55000 len:0x10000 key:0x187800 
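[annotation — not part of the captured output] Every completion above is printed with status "ABORTED - SQ DELETION (00/08)", i.e. NVMe Status Code Type 0x0 (generic) with Status Code 0x08: the command was aborted because its submission queue was deleted while bdev_nvme tears each qpair down for the controller reset, not because the I/O itself failed. Below is a minimal C sketch of how a consumer of the SPDK NVMe driver might classify such completions. The callback, context, and counter names are hypothetical; only struct spdk_nvme_cpl, SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_ABORTED_SQ_DELETION, and spdk_nvme_cpl_is_error() are taken from SPDK's public nvme_spec.h definitions.

    #include <stdbool.h>
    #include <stdint.h>
    #include <spdk/nvme.h>

    /* True for the "(00/08)" status seen throughout this log: generic
     * status code type, "command aborted due to SQ deletion". */
    static bool
    cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

    static uint64_t g_sq_deletion_aborts; /* hypothetical tally */

    static void
    io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;
            if (spdk_nvme_cpl_is_error(cpl) && cpl_is_sq_deletion_abort(cpl)) {
                    /* Expected while a qpair is disconnected for reset; such
                     * I/O is typically requeued once the reset completes. */
                    g_sq_deletion_aborts++;
                    return;
            }
            /* ... normal success / genuine-error handling ... */
    }

This is consistent with the messages being emitted at *NOTICE* rather than error severity: the aborts are a side effect of the reset path this test exercises, and the interleaved "was disconnected and freed. reset controller." lines show the driver reclaiming each qpair before resetting. [end annotation]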
00:21:17.826 [2024-05-15 00:07:46.922528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.922546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb34000 len:0x10000 key:0x187800 00:21:17.826 [2024-05-15 00:07:46.922561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:2f30 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.924049] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:21:17.826 [2024-05-15 00:07:46.924095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x188900 00:21:17.826 [2024-05-15 00:07:46.924117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.924143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x188900 00:21:17.826 [2024-05-15 00:07:46.924160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.924178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x188900 00:21:17.826 [2024-05-15 00:07:46.924194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.826 [2024-05-15 00:07:46.924212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924349] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x188900 00:21:17.827 [2024-05-15 00:07:46.924925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001aff0000 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.924970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.924989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 
key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x188400 00:21:17.827 [2024-05-15 00:07:46.925406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x188300 00:21:17.827 [2024-05-15 00:07:46.925443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.827 [2024-05-15 00:07:46.925460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010cb0000 len:0x10000 key:0x187800 00:21:17.827 [2024-05-15 00:07:46.925476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010cd1000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010cf2000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d13000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 
00:07:46.925575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d34000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d55000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d76000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d97000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000136d7000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000136b6000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013695000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013674000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013653000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013632000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.925908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.925925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013611000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.926022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135f0000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.926056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce2b000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.926089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.926122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.926154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cdc8000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.926187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.926219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd86000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.926257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd65000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.926290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd44000 len:0x10000 key:0x187800 00:21:17.828 [2024-05-15 00:07:46.926306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:57d0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.927968] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:21:17.828 [2024-05-15 00:07:46.928005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x188a00 00:21:17.828 [2024-05-15 00:07:46.928025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:80b0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.928056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x188a00 00:21:17.828 [2024-05-15 00:07:46.928073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:80b0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.928091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x188a00 00:21:17.828 [2024-05-15 00:07:46.928107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:80b0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.928125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x188a00 00:21:17.828 [2024-05-15 00:07:46.928140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:80b0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.928158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x188a00 00:21:17.828 [2024-05-15 00:07:46.928174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:80b0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.928191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x188a00 00:21:17.828 [2024-05-15 00:07:46.928206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:80b0 p:0 m:0 dnr:0 00:21:17.828 [2024-05-15 00:07:46.928223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 
key:0x188a00
00:21:17.828 [condensed for readability: 57 repeated nvme_qpair.c command/completion NOTICE pairs, 00:07:46.928239-.930166 — WRITE sqid:1 cid:62-64,0-53 nsid:1 lba:41856-49024 (step 128) len:128 SGL KEYED DATA BLOCK (keys 0x188a00, 0x188000, 0x188600, 0x188400; per-command buffer addresses omitted), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:80b0 p:0 m:0 dnr:0]
00:21:17.830 [2024-05-15 00:07:46.931746] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller.
00:21:17.830 [condensed for readability: 64 repeated nvme_qpair.c command/completion NOTICE pairs, 00:07:46.931798-.933899 — WRITE sqid:1 cid:57-64,0-55 nsid:1 lba:32768-40832 (step 128) len:128 SGL KEYED DATA BLOCK (keys 0x188600, 0x188e00, 0x188700; per-command buffer addresses omitted), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:4360 p:0 m:0 dnr:0]
00:21:17.832 [2024-05-15 00:07:46.935901] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller.
00:21:17.832 [condensed for readability: four ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:1-4 nsid:0 cdw10:00000000 cdw11:00000000), 00:07:46.935988-.936108, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:17.832 [2024-05-15 00:07:46.937643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:17.832 [2024-05-15 00:07:46.937669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:21:17.832 [2024-05-15 00:07:46.937685] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:17.832 [2024-05-15 00:07:46.937710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.937729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.937746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.937760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.937775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.937790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.937805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.937820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.939256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:17.832 [2024-05-15 00:07:46.939281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:17.832 [2024-05-15 00:07:46.939297] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:17.832 [2024-05-15 00:07:46.939320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.939339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.939355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.939370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.939385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.939399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.939414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.939429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.940719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:17.832 [2024-05-15 00:07:46.940744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:17.832 [2024-05-15 00:07:46.940760] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:17.832 [2024-05-15 00:07:46.940782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.940806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.940823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.940837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.940852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.940867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.940882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.940896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.942251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:17.832 [2024-05-15 00:07:46.942276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:17.832 [2024-05-15 00:07:46.942291] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:17.832 [2024-05-15 00:07:46.942314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.942332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.942348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.942362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.942377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.942391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.942406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.942421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.943681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:17.832 [2024-05-15 00:07:46.943706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:17.832 [2024-05-15 00:07:46.943721] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:17.832 [2024-05-15 00:07:46.943745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.943764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.943780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.943794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.943810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.943829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.943845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.943860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.945129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:17.832 [2024-05-15 00:07:46.945154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:17.832 [2024-05-15 00:07:46.945169] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:17.832 [2024-05-15 00:07:46.945191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.832 [2024-05-15 00:07:46.945210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.832 [2024-05-15 00:07:46.945225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.833 [2024-05-15 00:07:46.945240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.833 [2024-05-15 00:07:46.945255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.833 [2024-05-15 00:07:46.945269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.833 [2024-05-15 00:07:46.945284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.833 [2024-05-15 00:07:46.945298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.833 [2024-05-15 00:07:46.946513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:17.833 [2024-05-15 00:07:46.946537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:17.833 [2024-05-15 00:07:46.946552] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:17.833 [2024-05-15 00:07:46.946575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.833 [2024-05-15 00:07:46.946593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.833 [2024-05-15 00:07:46.946609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.833 [2024-05-15 00:07:46.946624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.833 [2024-05-15 00:07:46.946639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.833 [2024-05-15 00:07:46.946653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.833 [2024-05-15 00:07:46.946668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.833 [2024-05-15 00:07:46.946682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0 00:21:17.833 [2024-05-15 00:07:46.947951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:17.833 [2024-05-15 00:07:46.947981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:17.833 [2024-05-15 00:07:46.947996] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:17.833 [2024-05-15 00:07:46.948019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.833 [2024-05-15 00:07:46.948038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0
00:21:17.833 [2024-05-15 00:07:46.948054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.833 [2024-05-15 00:07:46.948069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0
00:21:17.833 [2024-05-15 00:07:46.948084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.833 [2024-05-15 00:07:46.948098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0
00:21:17.833 [2024-05-15 00:07:46.948114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.833 [2024-05-15 00:07:46.948128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0
00:21:17.833 [2024-05-15 00:07:46.949481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:17.833 [2024-05-15 00:07:46.949515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:17.833 [2024-05-15 00:07:46.949533] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:17.833 [2024-05-15 00:07:46.949559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.833 [2024-05-15 00:07:46.949579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0
00:21:17.833 [2024-05-15 00:07:46.949596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.833 [2024-05-15 00:07:46.949610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0
00:21:17.833 [2024-05-15 00:07:46.949626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.833 [2024-05-15 00:07:46.949640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0
00:21:17.833 [2024-05-15 00:07:46.949655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.833 [2024-05-15 00:07:46.949669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57593 cdw0:0 sqhd:f000 p:0 m:1 dnr:0
00:21:17.833 [2024-05-15 00:07:46.968173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:17.833 [2024-05-15 00:07:46.968202] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:17.833 [2024-05-15 00:07:46.968219] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:17.833 [2024-05-15 00:07:46.980175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:17.833 [2024-05-15 00:07:46.980212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:17.833 [2024-05-15 00:07:46.980232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:17.833 [2024-05-15 00:07:46.980306] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:17.833 [2024-05-15 00:07:46.980331] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:17.833 [2024-05-15 00:07:46.980351] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:17.833 [2024-05-15 00:07:46.980372] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:17.833 [2024-05-15 00:07:46.980394] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:17.833 [2024-05-15 00:07:46.980418] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:17.833 [2024-05-15 00:07:46.980437] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
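Each ABORTED - SQ DELETION completion above encodes its NVMe status as a hex (SCT/SC) pair: 00/08 is status code type 0x0 (generic command status) with status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion; that is expected here, since the admin queue is being torn down underneath the outstanding ASYNC EVENT REQUESTs. A small decoder for the pair (hypothetical helper, not part of the SPDK tree):

  decode_nvme_status() {   # usage: decode_nvme_status 00/08
      local sct=$((16#${1%/*})) sc=$((16#${1#*/})) type
      case "$sct" in
          0) type="Generic Command Status" ;;
          1) type="Command Specific Status" ;;
          2) type="Media and Data Integrity Errors" ;;
          *) type="SCT $sct" ;;
      esac
      printf '%s, SC 0x%02x\n' "$type" "$sc"
  }
  decode_nvme_status 00/08   # -> Generic Command Status, SC 0x08 (SQ deletion abort)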
00:21:17.833 [2024-05-15 00:07:46.980557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:17.833 [2024-05-15 00:07:46.980583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:17.833 [2024-05-15 00:07:46.980600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:17.833 [2024-05-15 00:07:46.980625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:17.833 [2024-05-15 00:07:46.983536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:17.833 task offset: 43008 on job bdev=Nvme1n1 fails
00:21:17.833
00:21:17.833 Latency(us)
00:21:17.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:17.833 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme1n1 ended in about 2.21 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme1n1 : 2.21 144.47 9.03 28.89 0.00 367320.18 51263.72 1056343.23
00:21:17.833 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme2n1 ended in about 2.22 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme2n1 : 2.22 145.30 9.08 28.88 0.00 362694.79 7184.69 1056343.23
00:21:17.833 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme3n1 ended in about 2.22 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme3n1 : 2.22 158.75 9.92 28.86 0.00 333997.82 8009.96 1056343.23
00:21:17.833 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme4n1 ended in about 2.22 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme4n1 : 2.22 158.66 9.92 28.85 0.00 331408.88 17573.36 1056343.23
00:21:17.833 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme5n1 ended in about 2.22 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme5n1 : 2.22 144.16 9.01 28.83 0.00 356283.86 21845.33 1168191.34
00:21:17.833 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme6n1 ended in about 2.22 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme6n1 : 2.22 144.09 9.01 28.82 0.00 353380.63 26991.12 1155763.77
00:21:17.833 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme7n1 ended in about 2.22 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme7n1 : 2.22 144.01 9.00 28.80 0.00 350809.76 34175.81 1137122.42
00:21:17.833 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme8n1 ended in about 2.22 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme8n1 : 2.22 143.93 9.00 28.79 0.00 347950.65 41748.86 1130908.63
00:21:17.833 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme9n1 ended in about 2.22 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme9n1 : 2.22 143.85 8.99 28.77 0.00 345098.11 70681.79 1118481.07
00:21:17.833 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.833 Job: Nvme10n1 ended in about 2.23 seconds with error
00:21:17.833 Verification LBA range: start 0x0 length 0x400
00:21:17.833 Nvme10n1 : 2.23 115.02 7.19 28.76 0.00 410642.51 71458.51 1099839.72
00:21:17.833 ===================================================================================================================
00:21:17.833 Total : 1442.25 90.14 288.25 0.00 354663.91 7184.69 1168191.34
00:21:17.833 [2024-05-15 00:07:47.012075] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:17.833 [2024-05-15 00:07:47.012152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:17.833 [2024-05-15 00:07:47.012189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:17.833 [2024-05-15 00:07:47.021285] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.833 [2024-05-15 00:07:47.021318] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.833 [2024-05-15 00:07:47.021334] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:21:17.834 [2024-05-15 00:07:47.021405] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.834 [2024-05-15 00:07:47.021428] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.834 [2024-05-15 00:07:47.021441] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:21:17.834 [2024-05-15 00:07:47.021509] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.834 [2024-05-15 00:07:47.021531] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.834 [2024-05-15 00:07:47.021543] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:21:17.834 [2024-05-15 00:07:47.024790] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.834 [2024-05-15 00:07:47.024819] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.834 [2024-05-15 00:07:47.024833] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:21:17.834 [2024-05-15 00:07:47.024921] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.834 [2024-05-15 00:07:47.024952] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.834 [2024-05-15 00:07:47.024966] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:21:17.834 [2024-05-15 00:07:47.025031] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.834 [2024-05-15 00:07:47.025053] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.834 [2024-05-15 00:07:47.025066] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040
00:21:17.834 [2024-05-15 00:07:47.025142] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.834 [2024-05-15 00:07:47.025171] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.834 [2024-05-15 00:07:47.025185] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500
00:21:17.834 [2024-05-15 00:07:47.025718] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.834 [2024-05-15 00:07:47.025744] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.834 [2024-05-15 00:07:47.025757] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0
00:21:17.834 [2024-05-15 00:07:47.025828] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.834 [2024-05-15 00:07:47.025851] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.834 [2024-05-15 00:07:47.025863] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080
00:21:17.834 [2024-05-15 00:07:47.025941] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:17.834 [2024-05-15 00:07:47.025965] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:17.834 [2024-05-15 00:07:47.025978] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0
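The bdevperf latency table above is internally consistent: at the 65536-byte IO size used by this job, MiB/s is simply IOPS/16 (144.47/16 = 9.03 for the Nvme1n1 row, 1442.25/16 = 90.14 for the Total row), and the per-job Fail/s values sum to the 288.25 in the Total row. A quick spot check:

  awk 'BEGIN { io = 65536; iops = 144.47;                      # Nvme1n1 row
               printf "%.2f MiB/s\n", iops * io / (1024 * 1024) }'   # -> 9.03
  awk 'BEGIN { printf "%.2f MiB/s\n", 1442.25 / 16 }'                # Total -> 90.14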
00:21:18.400 00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 586199
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 586199 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:21:18.400
00:21:18.400 real 0m5.148s
00:21:18.400 user 0m18.035s
00:21:18.400 sys 0m1.205s
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:18.400 ************************************
00:21:18.400 END TEST nvmf_shutdown_tc3
00:21:18.400 ************************************
00:07:47 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:21:18.400
00:21:18.400 real 0m19.700s
00:21:18.400 user 1m8.207s
00:21:18.400 sys 0m5.303s
00:07:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:18.400 ************************************
00:21:18.400 END TEST nvmf_shutdown
00:21:18.400 ************************************
00:07:47 nvmf_rdma -- nvmf/nvmf.sh@85 -- # timing_exit target
00:07:47 nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:47 nvmf_rdma -- nvmf/nvmf.sh@87 -- # timing_enter host
00:07:47 nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable
00:07:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:47 nvmf_rdma -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]]
00:07:47 nvmf_rdma -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma
00:07:47 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:07:47 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:21:18.400 ************************************
00:21:18.400 START TEST nvmf_multicontroller
00:21:18.400 ************************************
00:07:47 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma
00:21:18.400 * Looking for test storage...
00:21:18.400 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.400 00:07:47 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:21:18.401 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:21:18.401 00:21:18.401 real 0m0.068s 00:21:18.401 user 0m0.028s 00:21:18.401 sys 0m0.046s 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:18.401 00:07:47 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:18.401 ************************************ 00:21:18.401 END TEST nvmf_multicontroller 00:21:18.401 ************************************ 00:21:18.401 00:07:47 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:18.401 00:07:47 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:18.401 00:07:47 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:18.401 00:07:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:18.659 ************************************ 00:21:18.659 START TEST nvmf_aer 00:21:18.659 ************************************ 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:18.659 * Looking for test storage... 00:21:18.659 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:18.659 00:07:47 
nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:18.659 00:07:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:21.189 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:21.189 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.189 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:21.190 Found net devices under 0000:09:00.0: mlx_0_0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:09:00.1: mlx_0_1' 00:21:21.190 Found net devices under 0000:09:00.1: mlx_0_1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 
00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:21.190 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:21.190 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:21:21.190 altname enp9s0f0np0 00:21:21.190 inet 192.168.100.8/24 scope global mlx_0_0 00:21:21.190 valid_lft forever preferred_lft forever 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:21.190 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:21.190 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:21:21.190 altname enp9s0f1np1 00:21:21.190 inet 192.168.100.9/24 scope global mlx_0_1 00:21:21.190 valid_lft forever preferred_lft forever 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@105 -- # continue 2 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:21.190 192.168.100.9' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:21.190 192.168.100.9' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:21.190 192.168.100.9' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=588698 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 588698 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 588698 ']' 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:21.190 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.191 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:21.191 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.191 [2024-05-15 00:07:50.238828] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:21:21.191 [2024-05-15 00:07:50.238907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.191 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.191 [2024-05-15 00:07:50.318562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.191 [2024-05-15 00:07:50.437092] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.191 [2024-05-15 00:07:50.437155] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.191 [2024-05-15 00:07:50.437171] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.191 [2024-05-15 00:07:50.437184] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.191 [2024-05-15 00:07:50.437196] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
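Condensing the nvmftestinit trace above: rdma_device_init loads the IB/RDMA kernel module stack, and allocate_nic_ips pulls each port's IPv4 address with the exact ip/awk/cut pipeline shown, after which head/tail split the list into the first and second target IPs. A standalone re-run would look roughly like this sketch (interface names mlx_0_0/mlx_0_1 match this rig and are assumptions elsewhere; requires root):

  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"                     # same list as load_ib_rdma_modules
  done
  get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9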
00:21:21.191 [2024-05-15 00:07:50.437265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:21.191 [2024-05-15 00:07:50.437343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:21.191 [2024-05-15 00:07:50.437438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:21.191 [2024-05-15 00:07:50.437440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:21.449 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@860 -- # return 0
00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:07:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:21.449 [2024-05-15 00:07:50.603893] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1390a20/0x1394f10) succeed.
00:21:21.449 [2024-05-15 00:07:50.614887] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1392060/0x13d65a0) succeed.
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:21.449 Malloc0
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:21.753 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:21.753 [2024-05-15 00:07:50.801808] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:21:21.753 [2024-05-15 00:07:50.802160] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
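The rpc_cmd calls traced above reduce to the following sequence against the target's RPC socket; rpc_cmd is effectively a wrapper around scripts/rpc.py, and every flag below is taken verbatim from the trace (only the $rpc path variable is introduced here for readability):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 --name Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420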
rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:21.753 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.753 00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:21.753 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.753 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.753 [ 00:21:21.753 { 00:21:21.753 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:21.753 "subtype": "Discovery", 00:21:21.753 "listen_addresses": [], 00:21:21.753 "allow_any_host": true, 00:21:21.753 "hosts": [] 00:21:21.753 }, 00:21:21.753 { 00:21:21.753 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.753 "subtype": "NVMe", 00:21:21.753 "listen_addresses": [ 00:21:21.753 { 00:21:21.753 "trtype": "RDMA", 00:21:21.753 "adrfam": "IPv4", 00:21:21.753 "traddr": "192.168.100.8", 00:21:21.753 "trsvcid": "4420" 00:21:21.753 } 00:21:21.753 ], 00:21:21.753 "allow_any_host": true, 00:21:21.753 "hosts": [], 00:21:21.753 "serial_number": "SPDK00000000000001", 00:21:21.753 "model_number": "SPDK bdev Controller", 00:21:21.754 "max_namespaces": 2, 00:21:21.754 "min_cntlid": 1, 00:21:21.754 "max_cntlid": 65519, 00:21:21.754 "namespaces": [ 00:21:21.754 { 00:21:21.754 "nsid": 1, 00:21:21.754 "bdev_name": "Malloc0", 00:21:21.754 "name": "Malloc0", 00:21:21.754 "nguid": "FB58A12A684F44E1BBD599F68EC56339", 00:21:21.754 "uuid": "fb58a12a-684f-44e1-bbd5-99f68ec56339" 00:21:21.754 } 00:21:21.754 ] 00:21:21.754 } 00:21:21.754 ] 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=588819 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:21:21.754 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:21:21.754 00:07:50 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.754 Malloc1 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.754 [ 00:21:21.754 { 00:21:21.754 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:21.754 "subtype": "Discovery", 00:21:21.754 "listen_addresses": [], 00:21:21.754 "allow_any_host": true, 00:21:21.754 "hosts": [] 00:21:21.754 }, 00:21:21.754 { 00:21:21.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.754 "subtype": "NVMe", 00:21:21.754 "listen_addresses": [ 00:21:21.754 { 00:21:21.754 "trtype": "RDMA", 00:21:21.754 "adrfam": "IPv4", 00:21:21.754 "traddr": "192.168.100.8", 00:21:21.754 "trsvcid": "4420" 00:21:21.754 } 00:21:21.754 ], 00:21:21.754 "allow_any_host": true, 00:21:21.754 "hosts": [], 00:21:21.754 "serial_number": "SPDK00000000000001", 00:21:21.754 "model_number": "SPDK bdev Controller", 00:21:21.754 "max_namespaces": 2, 00:21:21.754 "min_cntlid": 1, 00:21:21.754 "max_cntlid": 65519, 00:21:21.754 "namespaces": [ 00:21:21.754 { 00:21:21.754 "nsid": 1, 00:21:21.754 "bdev_name": "Malloc0", 00:21:21.754 "name": "Malloc0", 00:21:21.754 "nguid": "FB58A12A684F44E1BBD599F68EC56339", 00:21:21.754 "uuid": "fb58a12a-684f-44e1-bbd5-99f68ec56339" 00:21:21.754 }, 00:21:21.754 { 00:21:21.754 "nsid": 2, 00:21:21.754 "bdev_name": "Malloc1", 00:21:21.754 "name": "Malloc1", 00:21:21.754 "nguid": "FF856A65F02746FD9E2AA6DFD7EB17A9", 00:21:21.754 "uuid": "ff856a65-f027-46fd-9e2a-a6dfd7eb17a9" 00:21:21.754 } 00:21:21.754 ] 00:21:21.754 } 00:21:21.754 ] 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.754 00:07:51 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 588819 00:21:22.048 Asynchronous Event Request test 00:21:22.048 Attaching to 192.168.100.8 00:21:22.048 Attached to 192.168.100.8 00:21:22.048 Registering asynchronous event callbacks... 00:21:22.048 Starting namespace attribute notice tests for all controllers... 00:21:22.048 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:22.048 aer_cb - Changed Namespace 00:21:22.048 Cleaning up... 
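
[Editor's note] The trace above is SPDK's AER host test: the aer tool attaches to the RDMA listener, registers asynchronous-event callbacks, and blocks until a namespace-attribute-changed notice arrives, which the script provokes by hot-adding Malloc1 as a second namespace. A minimal sketch of the same RPC sequence against an already-running nvmf_tgt, using scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it; bdev and subsystem names simply mirror this run):

    # Sketch: replay of the rpc_cmd calls visible in the trace, assuming
    # scripts/rpc.py is on PATH and nvmf_tgt is already listening.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Hot-adding a second namespace while the aer tool is attached is what
    # fires the namespace-attribute-changed AEN reported above:
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
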
00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:22.048 rmmod nvme_rdma 00:21:22.048 rmmod nvme_fabrics 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 588698 ']' 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 588698 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 588698 ']' 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 588698 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 588698 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 588698' 00:21:22.048 killing process with pid 588698 00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@965 -- # kill 588698 00:21:22.048 [2024-05-15 00:07:51.274040] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
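
[Editor's note] Teardown runs the setup in reverse before nvmftestfini unloads the host-side initiator modules and kills the target. A hedged sketch of the equivalent manual steps for this run (the pid is the one the trace reports, 588698; the modprobe lines are exactly what nvmftestfini executed above):

    rpc.py bdev_malloc_delete Malloc0
    rpc.py bdev_malloc_delete Malloc1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma       # unload host-side RDMA initiator
    modprobe -v -r nvme-fabrics
    kill 588698                    # nvmf_tgt pid for this run
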
00:21:22.048 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@970 -- # wait 588698 00:21:22.048 [2024-05-15 00:07:51.358941] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:22.306 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:22.306 00:07:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:22.306 00:21:22.306 real 0m3.884s 00:21:22.306 user 0m5.264s 00:21:22.306 sys 0m2.113s 00:21:22.306 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:22.306 00:07:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.306 ************************************ 00:21:22.306 END TEST nvmf_aer 00:21:22.306 ************************************ 00:21:22.564 00:07:51 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:22.564 00:07:51 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:22.564 00:07:51 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:22.564 00:07:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:22.564 ************************************ 00:21:22.564 START TEST nvmf_async_init 00:21:22.564 ************************************ 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:22.564 * Looking for test storage... 00:21:22.564 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.564 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.565 00:07:51 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=190a8e05bde244528c75a73c7c7366d7 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.565 00:07:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:25.094 00:07:54 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.094 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:25.095 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:25.095 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:25.095 Found net devices under 0000:09:00.0: mlx_0_0 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:25.095 Found net devices under 0000:09:00.1: mlx_0_1 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:25.095 00:07:54 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:25.095 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:25.095 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:21:25.095 altname enp9s0f0np0 00:21:25.095 inet 192.168.100.8/24 scope global mlx_0_0 00:21:25.095 valid_lft forever preferred_lft forever 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # 
ip=192.168.100.9 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:25.095 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:25.095 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:21:25.095 altname enp9s0f1np1 00:21:25.095 inet 192.168.100.9/24 scope global mlx_0_1 00:21:25.095 valid_lft forever preferred_lft forever 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:25.095 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:25.096 00:07:54 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:25.096 192.168.100.9' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:25.096 192.168.100.9' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:25.096 192.168.100.9' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=590910 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 590910 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 590910 ']' 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:25.096 00:07:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.096 [2024-05-15 00:07:54.321037] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:21:25.096 [2024-05-15 00:07:54.321120] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.096 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.096 [2024-05-15 00:07:54.399587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.353 [2024-05-15 00:07:54.516688] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.353 [2024-05-15 00:07:54.516742] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.353 [2024-05-15 00:07:54.516759] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.354 [2024-05-15 00:07:54.516773] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.354 [2024-05-15 00:07:54.516785] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.354 [2024-05-15 00:07:54.516814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.918 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:25.918 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:21:25.918 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.918 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.918 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 [2024-05-15 00:07:55.306563] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19fa900/0x19fedf0) succeed. 00:21:26.176 [2024-05-15 00:07:55.318336] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19fbe00/0x1a40480) succeed. 
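
[Editor's note] nvmfappstart boots a fresh target with a one-core mask, and waitforlisten polls the RPC socket before any rpc_cmd is issued. A simplified sketch under the same paths as this workspace (the polling loop is an assumption standing in for the real waitforlisten helper, which also checks the process is alive; /var/tmp/spdk.sock is the SPDK default socket):

    # Start the target exactly as the trace does, in the background.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    pid=$!
    # Wait until the app answers RPCs on the default UNIX socket.
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
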
00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 null0 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 190a8e05bde244528c75a73c7c7366d7 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 [2024-05-15 00:07:55.403333] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:26.176 [2024-05-15 00:07:55.403701] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 nvme0n1 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.176 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 [ 00:21:26.176 { 00:21:26.176 "name": "nvme0n1", 00:21:26.176 
"aliases": [ 00:21:26.176 "190a8e05-bde2-4452-8c75-a73c7c7366d7" 00:21:26.176 ], 00:21:26.176 "product_name": "NVMe disk", 00:21:26.176 "block_size": 512, 00:21:26.176 "num_blocks": 2097152, 00:21:26.176 "uuid": "190a8e05-bde2-4452-8c75-a73c7c7366d7", 00:21:26.176 "assigned_rate_limits": { 00:21:26.176 "rw_ios_per_sec": 0, 00:21:26.176 "rw_mbytes_per_sec": 0, 00:21:26.176 "r_mbytes_per_sec": 0, 00:21:26.176 "w_mbytes_per_sec": 0 00:21:26.176 }, 00:21:26.176 "claimed": false, 00:21:26.176 "zoned": false, 00:21:26.176 "supported_io_types": { 00:21:26.176 "read": true, 00:21:26.176 "write": true, 00:21:26.176 "unmap": false, 00:21:26.176 "write_zeroes": true, 00:21:26.176 "flush": true, 00:21:26.176 "reset": true, 00:21:26.176 "compare": true, 00:21:26.176 "compare_and_write": true, 00:21:26.176 "abort": true, 00:21:26.176 "nvme_admin": true, 00:21:26.176 "nvme_io": true 00:21:26.177 }, 00:21:26.177 "memory_domains": [ 00:21:26.177 { 00:21:26.177 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:26.177 "dma_device_type": 0 00:21:26.177 } 00:21:26.177 ], 00:21:26.177 "driver_specific": { 00:21:26.177 "nvme": [ 00:21:26.177 { 00:21:26.177 "trid": { 00:21:26.177 "trtype": "RDMA", 00:21:26.177 "adrfam": "IPv4", 00:21:26.177 "traddr": "192.168.100.8", 00:21:26.177 "trsvcid": "4420", 00:21:26.177 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:26.177 }, 00:21:26.177 "ctrlr_data": { 00:21:26.177 "cntlid": 1, 00:21:26.177 "vendor_id": "0x8086", 00:21:26.177 "model_number": "SPDK bdev Controller", 00:21:26.177 "serial_number": "00000000000000000000", 00:21:26.177 "firmware_revision": "24.05", 00:21:26.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.177 "oacs": { 00:21:26.177 "security": 0, 00:21:26.177 "format": 0, 00:21:26.177 "firmware": 0, 00:21:26.177 "ns_manage": 0 00:21:26.177 }, 00:21:26.177 "multi_ctrlr": true, 00:21:26.177 "ana_reporting": false 00:21:26.177 }, 00:21:26.177 "vs": { 00:21:26.177 "nvme_version": "1.3" 00:21:26.177 }, 00:21:26.177 "ns_data": { 00:21:26.177 "id": 1, 00:21:26.177 "can_share": true 00:21:26.177 } 00:21:26.177 } 00:21:26.177 ], 00:21:26.177 "mp_policy": "active_passive" 00:21:26.177 } 00:21:26.177 } 00:21:26.177 ] 00:21:26.177 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.177 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:26.177 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.177 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.177 [2024-05-15 00:07:55.518038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:26.434 [2024-05-15 00:07:55.543963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:26.434 [2024-05-15 00:07:55.569311] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:26.434 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.434 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:26.434 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.434 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.434 [ 00:21:26.434 { 00:21:26.434 "name": "nvme0n1", 00:21:26.434 "aliases": [ 00:21:26.434 "190a8e05-bde2-4452-8c75-a73c7c7366d7" 00:21:26.434 ], 00:21:26.434 "product_name": "NVMe disk", 00:21:26.434 "block_size": 512, 00:21:26.434 "num_blocks": 2097152, 00:21:26.434 "uuid": "190a8e05-bde2-4452-8c75-a73c7c7366d7", 00:21:26.434 "assigned_rate_limits": { 00:21:26.434 "rw_ios_per_sec": 0, 00:21:26.434 "rw_mbytes_per_sec": 0, 00:21:26.435 "r_mbytes_per_sec": 0, 00:21:26.435 "w_mbytes_per_sec": 0 00:21:26.435 }, 00:21:26.435 "claimed": false, 00:21:26.435 "zoned": false, 00:21:26.435 "supported_io_types": { 00:21:26.435 "read": true, 00:21:26.435 "write": true, 00:21:26.435 "unmap": false, 00:21:26.435 "write_zeroes": true, 00:21:26.435 "flush": true, 00:21:26.435 "reset": true, 00:21:26.435 "compare": true, 00:21:26.435 "compare_and_write": true, 00:21:26.435 "abort": true, 00:21:26.435 "nvme_admin": true, 00:21:26.435 "nvme_io": true 00:21:26.435 }, 00:21:26.435 "memory_domains": [ 00:21:26.435 { 00:21:26.435 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:26.435 "dma_device_type": 0 00:21:26.435 } 00:21:26.435 ], 00:21:26.435 "driver_specific": { 00:21:26.435 "nvme": [ 00:21:26.435 { 00:21:26.435 "trid": { 00:21:26.435 "trtype": "RDMA", 00:21:26.435 "adrfam": "IPv4", 00:21:26.435 "traddr": "192.168.100.8", 00:21:26.435 "trsvcid": "4420", 00:21:26.435 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:26.435 }, 00:21:26.435 "ctrlr_data": { 00:21:26.435 "cntlid": 2, 00:21:26.435 "vendor_id": "0x8086", 00:21:26.435 "model_number": "SPDK bdev Controller", 00:21:26.435 "serial_number": "00000000000000000000", 00:21:26.435 "firmware_revision": "24.05", 00:21:26.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.435 "oacs": { 00:21:26.435 "security": 0, 00:21:26.435 "format": 0, 00:21:26.435 "firmware": 0, 00:21:26.435 "ns_manage": 0 00:21:26.435 }, 00:21:26.435 "multi_ctrlr": true, 00:21:26.435 "ana_reporting": false 00:21:26.435 }, 00:21:26.435 "vs": { 00:21:26.435 "nvme_version": "1.3" 00:21:26.435 }, 00:21:26.435 "ns_data": { 00:21:26.435 "id": 1, 00:21:26.435 "can_share": true 00:21:26.435 } 00:21:26.435 } 00:21:26.435 ], 00:21:26.435 "mp_policy": "active_passive" 00:21:26.435 } 00:21:26.435 } 00:21:26.435 ] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0cj5jPDmEW 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:26.435 00:07:55 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0cj5jPDmEW 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.435 [2024-05-15 00:07:55.627999] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0cj5jPDmEW 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0cj5jPDmEW 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.435 [2024-05-15 00:07:55.644012] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.435 nvme0n1 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.435 [ 00:21:26.435 { 00:21:26.435 "name": "nvme0n1", 00:21:26.435 "aliases": [ 00:21:26.435 "190a8e05-bde2-4452-8c75-a73c7c7366d7" 00:21:26.435 ], 00:21:26.435 "product_name": "NVMe disk", 00:21:26.435 "block_size": 512, 00:21:26.435 "num_blocks": 2097152, 00:21:26.435 "uuid": "190a8e05-bde2-4452-8c75-a73c7c7366d7", 00:21:26.435 "assigned_rate_limits": { 00:21:26.435 "rw_ios_per_sec": 0, 00:21:26.435 "rw_mbytes_per_sec": 0, 00:21:26.435 "r_mbytes_per_sec": 0, 00:21:26.435 "w_mbytes_per_sec": 0 00:21:26.435 }, 00:21:26.435 "claimed": false, 00:21:26.435 "zoned": false, 00:21:26.435 "supported_io_types": { 00:21:26.435 "read": true, 00:21:26.435 "write": true, 00:21:26.435 "unmap": false, 00:21:26.435 "write_zeroes": true, 00:21:26.435 "flush": true, 00:21:26.435 "reset": true, 00:21:26.435 "compare": true, 00:21:26.435 "compare_and_write": true, 00:21:26.435 "abort": true, 
00:21:26.435 "nvme_admin": true, 00:21:26.435 "nvme_io": true 00:21:26.435 }, 00:21:26.435 "memory_domains": [ 00:21:26.435 { 00:21:26.435 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:26.435 "dma_device_type": 0 00:21:26.435 } 00:21:26.435 ], 00:21:26.435 "driver_specific": { 00:21:26.435 "nvme": [ 00:21:26.435 { 00:21:26.435 "trid": { 00:21:26.435 "trtype": "RDMA", 00:21:26.435 "adrfam": "IPv4", 00:21:26.435 "traddr": "192.168.100.8", 00:21:26.435 "trsvcid": "4421", 00:21:26.435 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:26.435 }, 00:21:26.435 "ctrlr_data": { 00:21:26.435 "cntlid": 3, 00:21:26.435 "vendor_id": "0x8086", 00:21:26.435 "model_number": "SPDK bdev Controller", 00:21:26.435 "serial_number": "00000000000000000000", 00:21:26.435 "firmware_revision": "24.05", 00:21:26.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.435 "oacs": { 00:21:26.435 "security": 0, 00:21:26.435 "format": 0, 00:21:26.435 "firmware": 0, 00:21:26.435 "ns_manage": 0 00:21:26.435 }, 00:21:26.435 "multi_ctrlr": true, 00:21:26.435 "ana_reporting": false 00:21:26.435 }, 00:21:26.435 "vs": { 00:21:26.435 "nvme_version": "1.3" 00:21:26.435 }, 00:21:26.435 "ns_data": { 00:21:26.435 "id": 1, 00:21:26.435 "can_share": true 00:21:26.435 } 00:21:26.435 } 00:21:26.435 ], 00:21:26.435 "mp_policy": "active_passive" 00:21:26.435 } 00:21:26.435 } 00:21:26.435 ] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.0cj5jPDmEW 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:26.435 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:26.435 rmmod nvme_rdma 00:21:26.435 rmmod nvme_fabrics 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 590910 ']' 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 590910 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 590910 ']' 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 590910 00:21:26.693 00:07:55 
nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 590910 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 590910' 00:21:26.693 killing process with pid 590910 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 590910 00:21:26.693 [2024-05-15 00:07:55.823209] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:26.693 00:07:55 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 590910 00:21:26.693 [2024-05-15 00:07:55.871514] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:26.951 00:07:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:26.951 00:07:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:26.951 00:21:26.951 real 0m4.433s 00:21:26.951 user 0m2.945s 00:21:26.951 sys 0m2.113s 00:21:26.951 00:07:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:26.951 00:07:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.951 ************************************ 00:21:26.951 END TEST nvmf_async_init 00:21:26.951 ************************************ 00:21:26.951 00:07:56 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:26.951 00:07:56 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:26.951 00:07:56 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:26.951 00:07:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:26.951 ************************************ 00:21:26.951 START TEST dma 00:21:26.951 ************************************ 00:21:26.951 00:07:56 nvmf_rdma.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:26.951 * Looking for test storage... 
00:21:26.951 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:26.951 00:07:56 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.951 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:26.951 00:07:56 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.951 00:07:56 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.951 00:07:56 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.951 00:07:56 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.951 00:07:56 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.952 00:07:56 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.952 00:07:56 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:21:26.952 00:07:56 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.952 00:07:56 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:21:26.952 00:07:56 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:21:26.952 00:07:56 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:21:26.952 00:07:56 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:21:26.952 00:07:56 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.952 00:07:56 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.952 00:07:56 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:26.952 00:07:56 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.952 00:07:56 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.480 00:07:58 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:29.480 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.480 00:07:58 nvmf_rdma.dma -- 
nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:29.480 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:29.480 Found net devices under 0000:09:00.0: mlx_0_0 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:29.480 Found net devices under 0000:09:00.1: mlx_0_1 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 
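The discovery loop above works purely from sysfs: for each PCI function whose vendor/device ID matched the mlx5 table (0x15b3:0x1017 here), it globs the function's net/ directory to recover the kernel netdev name. The equivalent standalone lookup, using the two PCI addresses found in this run:

    # each entry under net/ is a netdev backed by that PCI function
    for pci in 0000:09:00.0 0000:09:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"   # prints mlx_0_0 / mlx_0_1
    done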
00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:29.480 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:29.481 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.481 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:21:29.481 altname enp9s0f0np0 00:21:29.481 inet 192.168.100.8/24 scope global mlx_0_0 00:21:29.481 valid_lft forever preferred_lft forever 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:29.481 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.481 link/ether 
b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:21:29.481 altname enp9s0f1np1 00:21:29.481 inet 192.168.100.9/24 scope global mlx_0_1 00:21:29.481 valid_lft forever preferred_lft forever 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:29.481 192.168.100.9' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:29.481 192.168.100.9' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@457 -- # 
head -n 1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:29.481 192.168.100.9' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:29.481 00:07:58 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.481 00:07:58 nvmf_rdma.dma -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:29.481 00:07:58 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=593139 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:29.481 00:07:58 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 593139 00:21:29.481 00:07:58 nvmf_rdma.dma -- common/autotest_common.sh@827 -- # '[' -z 593139 ']' 00:21:29.481 00:07:58 nvmf_rdma.dma -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.481 00:07:58 nvmf_rdma.dma -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:29.481 00:07:58 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.481 00:07:58 nvmf_rdma.dma -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:29.481 00:07:58 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:29.739 [2024-05-15 00:07:58.837611] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:21:29.739 [2024-05-15 00:07:58.837701] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.739 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.739 [2024-05-15 00:07:58.919183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:29.739 [2024-05-15 00:07:59.041135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.739 [2024-05-15 00:07:59.041198] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.739 [2024-05-15 00:07:59.041214] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.739 [2024-05-15 00:07:59.041227] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.739 [2024-05-15 00:07:59.041239] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
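The RDMA_IP_LIST plumbing above reduces to one pipeline per interface, and the target the suite then waits for is launched with exactly the flags shown on the nvmfpid line. A sketch of both steps as exercised here, run from the SPDK repository root:

    # first global IPv4 address on each RDMA-capable netdev
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9
    # nvmf target: shared-memory id 0, all tracepoint groups, cores 0-1
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &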
00:21:29.739 [2024-05-15 00:07:59.041308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.739 [2024-05-15 00:07:59.041314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.998 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:29.998 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@860 -- # return 0 00:21:29.998 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.998 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.998 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:29.998 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.998 00:07:59 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:29.998 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.998 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:29.998 [2024-05-15 00:07:59.221687] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfc23d0/0xfc68c0) succeed. 00:21:29.998 [2024-05-15 00:07:59.233261] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfc38d0/0x1007f50) succeed. 00:21:29.998 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.998 00:07:59 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:21:29.998 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.998 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:30.257 Malloc0 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.257 00:07:59 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.257 00:07:59 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.257 00:07:59 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:30.257 [2024-05-15 00:07:59.419852] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:30.257 [2024-05-15 00:07:59.420202] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:30.257 00:07:59 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.257 00:07:59 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json 
/dev/fd/62 -b Nvme0n1 -f -x translate
00:21:30.257 00:07:59 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0
00:21:30.257 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=()
00:21:30.257 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config
00:21:30.257 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:21:30.257 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:21:30.257 {
00:21:30.257   "params": {
00:21:30.257     "name": "Nvme$subsystem",
00:21:30.257     "trtype": "$TEST_TRANSPORT",
00:21:30.257     "traddr": "$NVMF_FIRST_TARGET_IP",
00:21:30.257     "adrfam": "ipv4",
00:21:30.257     "trsvcid": "$NVMF_PORT",
00:21:30.257     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:21:30.257     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:21:30.257     "hdgst": ${hdgst:-false},
00:21:30.257     "ddgst": ${ddgst:-false}
00:21:30.257   },
00:21:30.257   "method": "bdev_nvme_attach_controller"
00:21:30.257 }
00:21:30.257 EOF
00:21:30.257 )")
00:21:30.257 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat
00:21:30.257 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq .
00:21:30.257 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=,
00:21:30.257 00:07:59 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:21:30.257   "params": {
00:21:30.257     "name": "Nvme0",
00:21:30.257     "trtype": "rdma",
00:21:30.257     "traddr": "192.168.100.8",
00:21:30.257     "adrfam": "ipv4",
00:21:30.257     "trsvcid": "4420",
00:21:30.257     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:21:30.257     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:21:30.257     "hdgst": false,
00:21:30.257     "ddgst": false
00:21:30.257   },
00:21:30.257   "method": "bdev_nvme_attach_controller"
00:21:30.257 }'
00:21:30.257 [2024-05-15 00:07:59.463256] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:21:30.257 [2024-05-15 00:07:59.463339] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593286 ]
00:21:30.257 EAL: No free 2048 kB hugepages reported on node 1
00:21:30.257 [2024-05-15 00:07:59.532549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:30.514 [2024-05-15 00:07:59.641482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:30.514 [2024-05-15 00:07:59.641486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:35.776 bdev Nvme0n1 reports 1 memory domains
00:21:35.776 bdev Nvme0n1 supports RDMA memory domain
00:21:35.776 Initialization complete, running randrw IO for 5 sec on 2 cores
00:21:35.776 ==========================================================================
00:21:35.776                                        Latency [us]
00:21:35.776          IOPS      MiB/s    Average        min        max
00:21:35.776 Core 2:  17993.65     70.29     888.36     360.31    7245.11
00:21:35.776 Core 3:  18288.99     71.44     873.91     307.48    6897.93
00:21:35.776 ==========================================================================
00:21:35.776 Total :  36282.64    141.73     881.08     307.48    7245.11
00:21:35.776
00:21:35.776 Total operations: 181447, translate 181447 pull_push 0 memzero 0
00:21:36.033 00:08:05 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
00:21:36.033 00:08:05 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json
00:21:36.033 00:08:05 nvmf_rdma.dma -- host/dma.sh@21 -- # jq .
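In the translate pass above, all 181447 operations were satisfied by memory-domain translation (translate 181447, pull_push 0): Nvme0n1 exposes an RDMA memory domain, so buffers are handed to the NIC by address translation rather than copied. The target-side provisioning behind it is the rpc_cmd sequence logged earlier; issued by hand against the running target it would look like the following sketch (scripts/rpc.py as the RPC client is an assumption; the RPC names and arguments are taken from the log):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0   # 256 MB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420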
00:21:36.033 [2024-05-15 00:08:05.159481] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:21:36.033 [2024-05-15 00:08:05.159564] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593947 ]
00:21:36.033 EAL: No free 2048 kB hugepages reported on node 1
00:21:36.033 [2024-05-15 00:08:05.229589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:36.033 [2024-05-15 00:08:05.335811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:36.033 [2024-05-15 00:08:05.335814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:42.588 bdev Malloc0 reports 2 memory domains
00:21:42.588 bdev Malloc0 doesn't support RDMA memory domain
00:21:42.588 Initialization complete, running randrw IO for 5 sec on 2 cores
00:21:42.588 ==========================================================================
00:21:42.588                                        Latency [us]
00:21:42.588          IOPS      MiB/s    Average        min        max
00:21:42.588 Core 2:  12221.07     47.74    1308.22     576.53    1878.44
00:21:42.588 Core 3:  12306.43     48.07    1299.14     571.65    2479.60
00:21:42.588 ==========================================================================
00:21:42.588 Total :  24527.50     95.81    1303.67     571.65    2479.60
00:21:42.588
00:21:42.588 Total operations: 122693, translate 0 pull_push 490772 memzero 0
00:21:42.588 00:08:10 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
00:21:42.588 00:08:10 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0
00:21:42.588 00:08:10 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0
00:21:42.588 00:08:10 nvmf_rdma.dma -- host/dma.sh@50 -- # jq .
00:21:42.588 Ignoring -M option
00:21:42.588 [2024-05-15 00:08:10.787168] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
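The contrast with the previous pass is the point of the pull_push run above: Malloc0 reports no RDMA memory domain, so none of its 122693 operations could be translated and every buffer was bounced through pull_push copies instead, which shows up directly in latency (about 1303 us average versus 881 us for the translate pass at the same queue depth and I/O size). This pass's invocation, with gen_malloc_json feeding the bdev config on fd 62:

    ./test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 \
        -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push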
00:21:42.588 [2024-05-15 00:08:10.787284] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594590 ]
00:21:42.588 EAL: No free 2048 kB hugepages reported on node 1
00:21:42.588 [2024-05-15 00:08:10.857690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:42.588 [2024-05-15 00:08:10.966458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:42.588 [2024-05-15 00:08:10.966463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:47.893 bdev 80d28631-6294-485c-8d98-6dc122ce99b9 reports 1 memory domains
00:21:47.893 bdev 80d28631-6294-485c-8d98-6dc122ce99b9 supports RDMA memory domain
00:21:47.893 Initialization complete, running randread IO for 5 sec on 2 cores
00:21:47.893 ==========================================================================
00:21:47.893                                        Latency [us]
00:21:47.893          IOPS      MiB/s    Average        min        max
00:21:47.893 Core 2:  64425.41    251.66     247.39      79.48    3948.01
00:21:47.893 Core 3:  66636.72    260.30     239.16     108.22    3933.21
00:21:47.893 ==========================================================================
00:21:47.893 Total : 131062.13    511.96     243.21      79.48    3948.01
00:21:47.893
00:21:47.893 Total operations: 655394, translate 0 pull_push 0 memzero 655394
00:21:47.893 00:08:16 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
00:21:47.893 EAL: No free 2048 kB hugepages reported on node 1
00:21:47.893 [2024-05-15 00:08:16.646638] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:49.798 Initializing NVMe Controllers
00:21:49.798 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:21:49.798 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:21:49.798 Initialization complete. Launching workers.
00:21:49.798 ========================================================
00:21:49.798                                        Latency(us)
00:21:49.798 Device Information : IOPS      MiB/s    Average        min        max
00:21:49.798 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.87 7979.70 7925.23 8033.41
00:21:49.798 ========================================================
00:21:49.798 Total : 2016.00 7.87 7979.70 7925.23 8033.41
00:21:49.798
00:21:49.798 00:08:18 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate
00:21:49.798 00:08:18 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0
00:21:49.798 00:08:18 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0
00:21:49.798 00:08:18 nvmf_rdma.dma -- host/dma.sh@50 -- # jq .
00:21:49.798 [2024-05-15 00:08:19.004052] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
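Two results above are worth separating. The memzero pass ran randread against lvs0/lvol0 and completed all 655394 operations as memzero requests, consistent with reads of never-written logical-volume clusters being satisfied by zero-filling the destination buffer through the DMA path (that reading of the counters is an inference; the log only reports the totals). The spdk_nvme_perf run after it is a plain NVMe-oF write sanity check against the same RDMA listener. Both, reproduced as invoked here:

    ./test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 \
        -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
    ./build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'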
00:21:49.798 [2024-05-15 00:08:19.004129] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595533 ]
00:21:49.798 EAL: No free 2048 kB hugepages reported on node 1
00:21:49.798 [2024-05-15 00:08:19.073271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:50.056 [2024-05-15 00:08:19.185178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:50.056 [2024-05-15 00:08:19.185182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:56.613 bdev 48a03ba7-d157-4866-9e7b-690c614e9aca reports 1 memory domains
00:21:56.613 bdev 48a03ba7-d157-4866-9e7b-690c614e9aca supports RDMA memory domain
00:21:56.613 Initialization complete, running randrw IO for 5 sec on 2 cores
00:21:56.613 ==========================================================================
00:21:56.613                                        Latency [us]
00:21:56.613          IOPS      MiB/s    Average        min        max
00:21:56.613 Core 2:  15460.06     60.39    1034.07      69.98   12167.55
00:21:56.613 Core 3:  15774.99     61.62    1013.35      22.90   12075.77
00:21:56.613 ==========================================================================
00:21:56.613 Total :  31235.05    122.01    1023.61      22.90   12167.55
00:21:56.613
00:21:56.613 Total operations: 156213, translate 156111 pull_push 0 memzero 102
00:21:56.613 00:08:24 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:21:56.613 00:08:24 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:21:56.613 rmmod nvme_rdma
00:21:56.613 rmmod nvme_fabrics
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 593139 ']'
00:21:56.613 00:08:24 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 593139
00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@946 -- # '[' -z 593139 ']'
00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@950 -- # kill -0 593139
00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@951 -- # uname
00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 593139
00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@964 -- # echo 'killing process with pid 593139'
00:21:56.613 killing process with pid 593139
00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@965 -- # kill 593139
00:21:56.613 [2024-05-15 00:08:24.805252] app.c:1024:log_deprecation_hits:
*WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:56.613 00:08:24 nvmf_rdma.dma -- common/autotest_common.sh@970 -- # wait 593139 00:21:56.613 [2024-05-15 00:08:24.866849] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:56.613 00:08:25 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.613 00:08:25 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:56.613 00:21:56.613 real 0m29.034s 00:21:56.613 user 1m36.416s 00:21:56.613 sys 0m3.064s 00:21:56.613 00:08:25 nvmf_rdma.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:56.613 00:08:25 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:56.613 ************************************ 00:21:56.613 END TEST dma 00:21:56.613 ************************************ 00:21:56.613 00:08:25 nvmf_rdma -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:21:56.613 00:08:25 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:56.613 00:08:25 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:56.613 00:08:25 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:56.613 ************************************ 00:21:56.613 START TEST nvmf_identify 00:21:56.613 ************************************ 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:21:56.613 * Looking for test storage... 00:21:56.613 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.613 00:08:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.514 00:08:27 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:58.514 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:58.514 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:58.514 Found net devices under 0000:09:00.0: mlx_0_0 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:58.514 Found net devices under 0000:09:00.1: mlx_0_1 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:58.514 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:58.514 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:21:58.514 altname enp9s0f0np0 00:21:58.514 inet 192.168.100.8/24 scope global mlx_0_0 00:21:58.514 valid_lft forever preferred_lft forever 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:58.514 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:58.514 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:21:58.514 altname enp9s0f1np1 00:21:58.514 inet 192.168.100.9/24 scope global mlx_0_1 00:21:58.514 valid_lft forever preferred_lft forever 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 
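
The get_ip_address helper traced at common.sh@112-113 is a one-line IPv4 lookup; allocate_nic_ips runs it per RDMA-capable netdev and, a little later (common.sh@456-458), the results are folded into one list and split with head/tail. A sketch of both pieces, values as observed on this node:

```sh
# IPv4 address of an interface, exactly as traced.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8
get_ip_address mlx_0_1   # -> 192.168.100.9

# The per-port results become one newline-separated list, split into
# first/second target addresses:
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
```
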
-- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:58.514 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.515 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:58.773 192.168.100.9' 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:58.773 192.168.100.9' 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:21:58.773 00:08:27 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:58.773 192.168.100.9' 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=598247 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 598247 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 598247 ']' 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:58.773 00:08:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.773 [2024-05-15 00:08:27.935524] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:21:58.773 [2024-05-15 00:08:27.935600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.773 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.773 [2024-05-15 00:08:28.012762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.031 [2024-05-15 00:08:28.136450] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.031 [2024-05-15 00:08:28.136503] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
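
The start_nvmf_tgt step traced here launches build/bin/nvmf_tgt with an explicit shm ID (-i 0), tracepoint mask (-e 0xFFFF) and core mask (-m 0xF), records the PID as nvmfpid, and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified sketch under the assumption that polling the RPC socket is enough (the real waitforlisten in autotest_common.sh is more thorough):

```sh
# Minimal start-and-wait for the NVMe-oF target; $SPDK is the checkout root.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket until the target is up (or the process has died).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
done
```
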
00:21:59.031 [2024-05-15 00:08:28.136518] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.031 [2024-05-15 00:08:28.136537] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.031 [2024-05-15 00:08:28.136549] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.031 [2024-05-15 00:08:28.136609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.031 [2024-05-15 00:08:28.136679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.031 [2024-05-15 00:08:28.136776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.031 [2024-05-15 00:08:28.136779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.031 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:59.031 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:21:59.031 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:59.031 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.031 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.031 [2024-05-15 00:08:28.288265] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f54a20/0x1f58f10) succeed. 00:21:59.031 [2024-05-15 00:08:28.299211] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f56060/0x1f9a5a0) succeed. 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.290 Malloc0 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:59.290 
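
The rpc_cmd calls traced in host/identify.sh@24-35 configure the freshly started target: create the RDMA transport, back a namespace with a 64 MiB malloc bdev, create subsystem cnode1, attach the namespace with fixed NGUID/EUI-64, and add RDMA listeners for both cnode1 and the discovery subsystem. The same sequence issued directly with scripts/rpc.py (arguments copied from the trace; rpc_cmd is just a thin wrapper around this):

```sh
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=$SPDK/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
```
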
00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.290 [2024-05-15 00:08:28.507145] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:59.290 [2024-05-15 00:08:28.507477] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.290 [ 00:21:59.290 { 00:21:59.290 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:59.290 "subtype": "Discovery", 00:21:59.290 "listen_addresses": [ 00:21:59.290 { 00:21:59.290 "trtype": "RDMA", 00:21:59.290 "adrfam": "IPv4", 00:21:59.290 "traddr": "192.168.100.8", 00:21:59.290 "trsvcid": "4420" 00:21:59.290 } 00:21:59.290 ], 00:21:59.290 "allow_any_host": true, 00:21:59.290 "hosts": [] 00:21:59.290 }, 00:21:59.290 { 00:21:59.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.290 "subtype": "NVMe", 00:21:59.290 "listen_addresses": [ 00:21:59.290 { 00:21:59.290 "trtype": "RDMA", 00:21:59.290 "adrfam": "IPv4", 00:21:59.290 "traddr": "192.168.100.8", 00:21:59.290 "trsvcid": "4420" 00:21:59.290 } 00:21:59.290 ], 00:21:59.290 "allow_any_host": true, 00:21:59.290 "hosts": [], 00:21:59.290 "serial_number": "SPDK00000000000001", 00:21:59.290 "model_number": "SPDK bdev Controller", 00:21:59.290 "max_namespaces": 32, 00:21:59.290 "min_cntlid": 1, 00:21:59.290 "max_cntlid": 65519, 00:21:59.290 "namespaces": [ 00:21:59.290 { 00:21:59.290 "nsid": 1, 00:21:59.290 "bdev_name": "Malloc0", 00:21:59.290 "name": "Malloc0", 00:21:59.290 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:59.290 "eui64": "ABCDEF0123456789", 00:21:59.290 "uuid": "7acefd28-87f0-4279-8a43-20c5e3d6c972" 00:21:59.290 } 00:21:59.290 ] 00:21:59.290 } 00:21:59.290 ] 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.290 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:59.290 [2024-05-15 00:08:28.547204] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
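
After nvmf_get_subsystems confirms the configuration (the JSON above shows the discovery subsystem plus cnode1 with its Malloc0 namespace), the test probes the discovery controller with spdk_nvme_identify. The target is selected by a single transport-ID string of space-separated key:value pairs, and -L all turns on every debug log flag, which is why the controller-initialization state machine is traced in such detail below:

```sh
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/spdk_nvme_identify" \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all
```
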
00:21:59.290 [2024-05-15 00:08:28.547254] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598298 ] 00:21:59.290 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.290 [2024-05-15 00:08:28.595287] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:59.290 [2024-05-15 00:08:28.595370] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:21:59.290 [2024-05-15 00:08:28.595395] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:21:59.290 [2024-05-15 00:08:28.595403] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:21:59.290 [2024-05-15 00:08:28.595440] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:59.290 [2024-05-15 00:08:28.613476] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:21:59.290 [2024-05-15 00:08:28.629733] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:59.290 [2024-05-15 00:08:28.629750] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:21:59.290 [2024-05-15 00:08:28.629760] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629770] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629778] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629786] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629793] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629801] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629809] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629817] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629825] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629832] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629846] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629854] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629862] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629870] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629878] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629885] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629893] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629901] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.629925] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.633954] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.633968] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.633977] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.633985] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.633994] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.634002] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.634011] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x188600 00:21:59.290 [2024-05-15 00:08:28.634019] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x188600 00:21:59.291 [2024-05-15 00:08:28.634027] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x188600 00:21:59.291 [2024-05-15 00:08:28.634036] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x188600 00:21:59.291 [2024-05-15 00:08:28.634045] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x188600 00:21:59.291 [2024-05-15 00:08:28.634053] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x188600 00:21:59.291 [2024-05-15 00:08:28.634061] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:21:59.291 [2024-05-15 00:08:28.634070] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:59.291 [2024-05-15 00:08:28.634076] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:21:59.291 [2024-05-15 00:08:28.634109] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.291 [2024-05-15 00:08:28.634132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x188600 00:21:59.554 [2024-05-15 00:08:28.641940] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.554 [2024-05-15 00:08:28.641963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:59.554 [2024-05-15 00:08:28.641976] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.641988] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:59.554 [2024-05-15 00:08:28.641999] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:59.554 [2024-05-15 00:08:28.642008] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:59.554 [2024-05-15 00:08:28.642033] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.554 [2024-05-15 00:08:28.642079] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.554 [2024-05-15 00:08:28.642089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:21:59.554 [2024-05-15 00:08:28.642099] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:59.554 [2024-05-15 00:08:28.642107] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642117] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:59.554 [2024-05-15 00:08:28.642128] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.554 [2024-05-15 00:08:28.642159] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.554 [2024-05-15 00:08:28.642168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:21:59.554 [2024-05-15 00:08:28.642177] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:59.554 [2024-05-15 00:08:28.642185] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642195] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:59.554 [2024-05-15 00:08:28.642206] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.554 [2024-05-15 00:08:28.642251] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.554 [2024-05-15 00:08:28.642260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:59.554 [2024-05-15 00:08:28.642269] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:59.554 [2024-05-15 00:08:28.642276] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642288] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.554 [2024-05-15 00:08:28.642320] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.554 [2024-05-15 00:08:28.642329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:59.554 [2024-05-15 00:08:28.642337] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:59.554 [2024-05-15 00:08:28.642345] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:59.554 [2024-05-15 00:08:28.642352] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642361] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:59.554 [2024-05-15 00:08:28.642474] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:59.554 [2024-05-15 00:08:28.642482] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:59.554 [2024-05-15 00:08:28.642495] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.554 [2024-05-15 00:08:28.642525] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.554 [2024-05-15 00:08:28.642534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:59.554 [2024-05-15 00:08:28.642543] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:59.554 [2024-05-15 00:08:28.642550] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642562] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.554 [2024-05-15 00:08:28.642591] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.554 [2024-05-15 00:08:28.642599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:59.554 [2024-05-15 00:08:28.642607] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:21:59.554 [2024-05-15 00:08:28.642615] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:59.554 [2024-05-15 00:08:28.642622] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642631] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:59.554 [2024-05-15 00:08:28.642649] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:59.554 [2024-05-15 00:08:28.642665] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.554 [2024-05-15 00:08:28.642677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600 00:21:59.554 [2024-05-15 00:08:28.642723] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.554 [2024-05-15 00:08:28.642732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:59.554 [2024-05-15 00:08:28.642744] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:59.554 [2024-05-15 00:08:28.642752] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:59.554 [2024-05-15 00:08:28.642759] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:59.554 [2024-05-15 00:08:28.642766] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:59.554 [2024-05-15 00:08:28.642773] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:59.554 [2024-05-15 00:08:28.642784] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:59.554 [2024-05-15 00:08:28.642792] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.642803] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:59.555 [2024-05-15 00:08:28.642818] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.642830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.555 [2024-05-15 00:08:28.642857] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.555 [2024-05-15 00:08:28.642866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:59.555 [2024-05-15 00:08:28.642877] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.642887] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.555 [2024-05-15 00:08:28.642897] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.642906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.555 [2024-05-15 00:08:28.642940] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.642952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.555 [2024-05-15 00:08:28.642962] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.642971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.555 [2024-05-15 00:08:28.642979] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:59.555 [2024-05-15 00:08:28.642987] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643003] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:59.555 [2024-05-15 00:08:28.643015] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.555 [2024-05-15 00:08:28.643047] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.555 [2024-05-15 00:08:28.643055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:21:59.555 [2024-05-15 00:08:28.643066] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:59.555 [2024-05-15 00:08:28.643074] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:59.555 [2024-05-15 00:08:28.643081] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643096] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600 00:21:59.555 [2024-05-15 00:08:28.643148] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.555 [2024-05-15 00:08:28.643158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:59.555 [2024-05-15 00:08:28.643168] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643182] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:59.555 [2024-05-15 00:08:28.643230] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x188600 00:21:59.555 [2024-05-15 00:08:28.643256] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.555 [2024-05-15 00:08:28.643294] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.555 [2024-05-15 00:08:28.643304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:59.555 [2024-05-15 00:08:28.643322] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x188600 00:21:59.555 [2024-05-15 00:08:28.643342] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643351] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.555 [2024-05-15 00:08:28.643359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:59.555 [2024-05-15 00:08:28.643367] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643375] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.555 [2024-05-15 00:08:28.643382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:59.555 [2024-05-15 00:08:28.643396] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x188600 00:21:59.555 [2024-05-15 00:08:28.643416] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x188600 00:21:59.555 [2024-05-15 00:08:28.643441] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.555 [2024-05-15 00:08:28.643450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:59.555 [2024-05-15 00:08:28.643465] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x188600 00:21:59.555 ===================================================== 00:21:59.555 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:59.555 
===================================================== 00:21:59.555 Controller Capabilities/Features 00:21:59.555 ================================ 00:21:59.555 Vendor ID: 0000 00:21:59.555 Subsystem Vendor ID: 0000 00:21:59.555 Serial Number: .................... 00:21:59.555 Model Number: ........................................ 00:21:59.555 Firmware Version: 24.05 00:21:59.555 Recommended Arb Burst: 0 00:21:59.555 IEEE OUI Identifier: 00 00 00 00:21:59.555 Multi-path I/O 00:21:59.555 May have multiple subsystem ports: No 00:21:59.555 May have multiple controllers: No 00:21:59.555 Associated with SR-IOV VF: No 00:21:59.555 Max Data Transfer Size: 131072 00:21:59.555 Max Number of Namespaces: 0 00:21:59.555 Max Number of I/O Queues: 1024 00:21:59.555 NVMe Specification Version (VS): 1.3 00:21:59.555 NVMe Specification Version (Identify): 1.3 00:21:59.555 Maximum Queue Entries: 128 00:21:59.555 Contiguous Queues Required: Yes 00:21:59.555 Arbitration Mechanisms Supported 00:21:59.555 Weighted Round Robin: Not Supported 00:21:59.555 Vendor Specific: Not Supported 00:21:59.555 Reset Timeout: 15000 ms 00:21:59.555 Doorbell Stride: 4 bytes 00:21:59.555 NVM Subsystem Reset: Not Supported 00:21:59.555 Command Sets Supported 00:21:59.555 NVM Command Set: Supported 00:21:59.555 Boot Partition: Not Supported 00:21:59.555 Memory Page Size Minimum: 4096 bytes 00:21:59.555 Memory Page Size Maximum: 4096 bytes 00:21:59.555 Persistent Memory Region: Not Supported 00:21:59.555 Optional Asynchronous Events Supported 00:21:59.555 Namespace Attribute Notices: Not Supported 00:21:59.555 Firmware Activation Notices: Not Supported 00:21:59.555 ANA Change Notices: Not Supported 00:21:59.555 PLE Aggregate Log Change Notices: Not Supported 00:21:59.555 LBA Status Info Alert Notices: Not Supported 00:21:59.555 EGE Aggregate Log Change Notices: Not Supported 00:21:59.555 Normal NVM Subsystem Shutdown event: Not Supported 00:21:59.555 Zone Descriptor Change Notices: Not Supported 00:21:59.555 Discovery Log Change Notices: Supported 00:21:59.555 Controller Attributes 00:21:59.555 128-bit Host Identifier: Not Supported 00:21:59.555 Non-Operational Permissive Mode: Not Supported 00:21:59.555 NVM Sets: Not Supported 00:21:59.555 Read Recovery Levels: Not Supported 00:21:59.555 Endurance Groups: Not Supported 00:21:59.555 Predictable Latency Mode: Not Supported 00:21:59.555 Traffic Based Keep ALive: Not Supported 00:21:59.555 Namespace Granularity: Not Supported 00:21:59.555 SQ Associations: Not Supported 00:21:59.555 UUID List: Not Supported 00:21:59.555 Multi-Domain Subsystem: Not Supported 00:21:59.555 Fixed Capacity Management: Not Supported 00:21:59.555 Variable Capacity Management: Not Supported 00:21:59.555 Delete Endurance Group: Not Supported 00:21:59.555 Delete NVM Set: Not Supported 00:21:59.555 Extended LBA Formats Supported: Not Supported 00:21:59.555 Flexible Data Placement Supported: Not Supported 00:21:59.555 00:21:59.555 Controller Memory Buffer Support 00:21:59.555 ================================ 00:21:59.555 Supported: No 00:21:59.555 00:21:59.555 Persistent Memory Region Support 00:21:59.555 ================================ 00:21:59.555 Supported: No 00:21:59.555 00:21:59.555 Admin Command Set Attributes 00:21:59.555 ============================ 00:21:59.555 Security Send/Receive: Not Supported 00:21:59.555 Format NVM: Not Supported 00:21:59.555 Firmware Activate/Download: Not Supported 00:21:59.555 Namespace Management: Not Supported 00:21:59.555 Device Self-Test: Not Supported 00:21:59.555 
Directives: Not Supported 00:21:59.555 NVMe-MI: Not Supported 00:21:59.555 Virtualization Management: Not Supported 00:21:59.556 Doorbell Buffer Config: Not Supported 00:21:59.556 Get LBA Status Capability: Not Supported 00:21:59.556 Command & Feature Lockdown Capability: Not Supported 00:21:59.556 Abort Command Limit: 1 00:21:59.556 Async Event Request Limit: 4 00:21:59.556 Number of Firmware Slots: N/A 00:21:59.556 Firmware Slot 1 Read-Only: N/A 00:21:59.556 Firmware Activation Without Reset: N/A 00:21:59.556 Multiple Update Detection Support: N/A 00:21:59.556 Firmware Update Granularity: No Information Provided 00:21:59.556 Per-Namespace SMART Log: No 00:21:59.556 Asymmetric Namespace Access Log Page: Not Supported 00:21:59.556 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:59.556 Command Effects Log Page: Not Supported 00:21:59.556 Get Log Page Extended Data: Supported 00:21:59.556 Telemetry Log Pages: Not Supported 00:21:59.556 Persistent Event Log Pages: Not Supported 00:21:59.556 Supported Log Pages Log Page: May Support 00:21:59.556 Commands Supported & Effects Log Page: Not Supported 00:21:59.556 Feature Identifiers & Effects Log Page:May Support 00:21:59.556 NVMe-MI Commands & Effects Log Page: May Support 00:21:59.556 Data Area 4 for Telemetry Log: Not Supported 00:21:59.556 Error Log Page Entries Supported: 128 00:21:59.556 Keep Alive: Not Supported 00:21:59.556 00:21:59.556 NVM Command Set Attributes 00:21:59.556 ========================== 00:21:59.556 Submission Queue Entry Size 00:21:59.556 Max: 1 00:21:59.556 Min: 1 00:21:59.556 Completion Queue Entry Size 00:21:59.556 Max: 1 00:21:59.556 Min: 1 00:21:59.556 Number of Namespaces: 0 00:21:59.556 Compare Command: Not Supported 00:21:59.556 Write Uncorrectable Command: Not Supported 00:21:59.556 Dataset Management Command: Not Supported 00:21:59.556 Write Zeroes Command: Not Supported 00:21:59.556 Set Features Save Field: Not Supported 00:21:59.556 Reservations: Not Supported 00:21:59.556 Timestamp: Not Supported 00:21:59.556 Copy: Not Supported 00:21:59.556 Volatile Write Cache: Not Present 00:21:59.556 Atomic Write Unit (Normal): 1 00:21:59.556 Atomic Write Unit (PFail): 1 00:21:59.556 Atomic Compare & Write Unit: 1 00:21:59.556 Fused Compare & Write: Supported 00:21:59.556 Scatter-Gather List 00:21:59.556 SGL Command Set: Supported 00:21:59.556 SGL Keyed: Supported 00:21:59.556 SGL Bit Bucket Descriptor: Not Supported 00:21:59.556 SGL Metadata Pointer: Not Supported 00:21:59.556 Oversized SGL: Not Supported 00:21:59.556 SGL Metadata Address: Not Supported 00:21:59.556 SGL Offset: Supported 00:21:59.556 Transport SGL Data Block: Not Supported 00:21:59.556 Replay Protected Memory Block: Not Supported 00:21:59.556 00:21:59.556 Firmware Slot Information 00:21:59.556 ========================= 00:21:59.556 Active slot: 0 00:21:59.556 00:21:59.556 00:21:59.556 Error Log 00:21:59.556 ========= 00:21:59.556 00:21:59.556 Active Namespaces 00:21:59.556 ================= 00:21:59.556 Discovery Log Page 00:21:59.556 ================== 00:21:59.556 Generation Counter: 2 00:21:59.556 Number of Records: 2 00:21:59.556 Record Format: 0 00:21:59.556 00:21:59.556 Discovery Log Entry 0 00:21:59.556 ---------------------- 00:21:59.556 Transport Type: 1 (RDMA) 00:21:59.556 Address Family: 1 (IPv4) 00:21:59.556 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:59.556 Entry Flags: 00:21:59.556 Duplicate Returned Information: 1 00:21:59.556 Explicit Persistent Connection Support for Discovery: 1 00:21:59.556 Transport Requirements: 
00:21:59.556 Secure Channel: Not Required 00:21:59.556 Port ID: 0 (0x0000) 00:21:59.556 Controller ID: 65535 (0xffff) 00:21:59.556 Admin Max SQ Size: 128 00:21:59.556 Transport Service Identifier: 4420 00:21:59.556 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:59.556 Transport Address: 192.168.100.8 00:21:59.556 Transport Specific Address Subtype - RDMA 00:21:59.556 RDMA QP Service Type: 1 (Reliable Connected) 00:21:59.556 RDMA Provider Type: 1 (No provider specified) 00:21:59.556 RDMA CM Service: 1 (RDMA_CM) 00:21:59.556 Discovery Log Entry 1 00:21:59.556 ---------------------- 00:21:59.556 Transport Type: 1 (RDMA) 00:21:59.556 Address Family: 1 (IPv4) 00:21:59.556 Subsystem Type: 2 (NVM Subsystem) 00:21:59.556 Entry Flags: 00:21:59.556 Duplicate Returned Information: 0 00:21:59.556 Explicit Persistent Connection Support for Discovery: 0 00:21:59.556 Transport Requirements: 00:21:59.556 Secure Channel: Not Required 00:21:59.556 Port ID: 0 (0x0000) 00:21:59.556 Controller ID: 65535 (0xffff) 00:21:59.556 Admin Max SQ Size: [2024-05-15 00:08:28.643561] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:59.556 [2024-05-15 00:08:28.643579] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21871 doesn't match qid 00:21:59.556 [2024-05-15 00:08:28.643598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32665 cdw0:5 sqhd:aef0 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.643611] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21871 doesn't match qid 00:21:59.556 [2024-05-15 00:08:28.643627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32665 cdw0:5 sqhd:aef0 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.643636] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21871 doesn't match qid 00:21:59.556 [2024-05-15 00:08:28.643647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32665 cdw0:5 sqhd:aef0 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.643656] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21871 doesn't match qid 00:21:59.556 [2024-05-15 00:08:28.643666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32665 cdw0:5 sqhd:aef0 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.643683] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.643696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.556 [2024-05-15 00:08:28.643721] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.556 [2024-05-15 00:08:28.643731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.643757] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.643769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.556 [2024-05-15 00:08:28.643778] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x188600 00:21:59.556 [2024-05-15 
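
The two records decoded above are the same discovery log page a host-side initiator would read: entry 0 is the current discovery subsystem itself, entry 1 is the cnode1 NVM subsystem, both reachable over RDMA at 192.168.100.8:4420. As a hedged illustration only (these commands are not run in this section), the nvme-cli equivalents would be a discover against that portal and then the connect command the harness pre-built earlier (NVME_CONNECT='nvme connect -i 15'), completed with the usual transport options:

```sh
# Read the same discovery log page from the initiator side
# (requires the nvme-rdma module loaded, as done earlier in the trace).
nvme discover -t rdma -a 192.168.100.8 -s 4420

# Connect to the advertised NVM subsystem with 15 I/O queues.
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
```
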
00:08:28.643809] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.556 [2024-05-15 00:08:28.643821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.643831] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:59.556 [2024-05-15 00:08:28.643840] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:59.556 [2024-05-15 00:08:28.643848] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.643861] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.643876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.556 [2024-05-15 00:08:28.643936] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.556 [2024-05-15 00:08:28.643947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.643957] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.643971] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.643984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.556 [2024-05-15 00:08:28.644016] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.556 [2024-05-15 00:08:28.644025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.644034] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.644047] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.644059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.556 [2024-05-15 00:08:28.644084] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.556 [2024-05-15 00:08:28.644094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.644103] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.644117] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.644129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.556 [2024-05-15 00:08:28.644154] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.556 [2024-05-15 00:08:28.644163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:21:59.556 [2024-05-15 00:08:28.644172] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.644185] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.556 [2024-05-15 00:08:28.644197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644243] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644261] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644274] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644326] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644343] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644355] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644392] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644409] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644421] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644451] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644468] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644480] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644508] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644525] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644538] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644566] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644583] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644595] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644625] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644642] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644654] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644680] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644697] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644709] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644738] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644754] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644767] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644799] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644816] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644828] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644858] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644875] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644888] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.644943] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.644954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.644963] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644977] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.644989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.645008] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.645017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.645026] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645039] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.645071] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.645080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.645089] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645102] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.645133] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.645142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.645150] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645163] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.645192] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.645201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.645209] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645238] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.645276] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.645300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.645308] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645321] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.645352] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.645360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:59.557 [2024-05-15 00:08:28.645368] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645380] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.557 [2024-05-15 00:08:28.645391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:21:59.557 [2024-05-15 00:08:28.645409] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.557 [2024-05-15 00:08:28.645417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:21:59.558 [2024-05-15 00:08:28.645425] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645438] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.558 [2024-05-15 00:08:28.645465] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.558 [2024-05-15 00:08:28.645473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:21:59.558 [2024-05-15 00:08:28.645481] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645493] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.558 [2024-05-15 00:08:28.645524] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.558 [2024-05-15 00:08:28.645532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:21:59.558 [2024-05-15 00:08:28.645541] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645553] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.558 [2024-05-15 00:08:28.645581] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.558 [2024-05-15 00:08:28.645590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:21:59.558 [2024-05-15 00:08:28.645598] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645611] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.558 [2024-05-15 00:08:28.645647] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.558 [2024-05-15 00:08:28.645656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:21:59.558 [2024-05-15 00:08:28.645664] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645677] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.558 [2024-05-15 00:08:28.645709] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.558 [2024-05-15 00:08:28.645718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:21:59.558 [2024-05-15 00:08:28.645726] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645738] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.558 [2024-05-15 00:08:28.645769] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.558 [2024-05-15 00:08:28.645777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:21:59.558 [2024-05-15 00:08:28.645785] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645797] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.558 [2024-05-15 00:08:28.645826] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.558 [2024-05-15 00:08:28.645834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:21:59.558 [2024-05-15 00:08:28.645843] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645855] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.558 [2024-05-15 00:08:28.645881] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.558 [2024-05-15 00:08:28.645890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:21:59.558 [2024-05-15 00:08:28.645898] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.645925] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.649967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.558 [2024-05-15 00:08:28.649989] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.558 [2024-05-15 00:08:28.649999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0011 p:0 m:0 dnr:0
00:21:59.558 [2024-05-15 00:08:28.650008] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x188600
00:21:59.558 [2024-05-15 00:08:28.650022] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:21:59.558 128
00:21:59.558 Transport Service Identifier: 4420
00:21:59.558 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:59.558 Transport Address: 192.168.100.8
00:21:59.558 Transport Specific Address Subtype - RDMA
00:21:59.558 RDMA QP Service Type: 1 (Reliable Connected)
00:21:59.558 RDMA Provider Type: 1 (No provider specified)
00:21:59.558 RDMA CM Service: 1 (RDMA_CM)
00:21:59.558 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:21:59.558 [2024-05-15 00:08:28.716009] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:21:59.558 [2024-05-15 00:08:28.716046] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598306 ]
00:21:59.558 EAL: No free 2048 kB hugepages reported on node 1
00:21:59.558 [2024-05-15 00:08:28.762540] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:21:59.558 [2024-05-15 00:08:28.762623] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:21:59.558 [2024-05-15 00:08:28.762649] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:21:59.558 [2024-05-15 00:08:28.762657] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:21:59.558 [2024-05-15 00:08:28.762687] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:21:59.558 [2024-05-15 00:08:28.781547] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
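The host/identify.sh step above invokes build/bin/spdk_nvme_identify with a transport ID string ('trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'); the DEBUG records that follow are SPDK's admin-queue state machine (connect adminq, read VS/CAP, check and set CC.EN, identify, configure AER, keep alive timeout, number of queues) running over RDMA. As a rough sketch only, a minimal C host performing the same connect-and-identify through SPDK's public API could look like the following; the program name "identify_sketch" and the printed fields are illustrative and not part of this test:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (hugepages etc., as in the EAL lines above). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* illustrative name, not from the test */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID syntax as the -r argument of spdk_nvme_identify. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
	    "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() drives the admin state machine seen in the log:
	 * FABRIC CONNECT, PROPERTY GET/SET for VS/CAP/CC/CSTS, IDENTIFY, AER,
	 * keep alive timeout, number of queues. */
	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
	if (ctrlr == NULL) {
		return 1;
	}

	/* The cached identify data backs the controller report printed further down. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID 0x%04x, MDTS %u, KAS %u (100 ms units)\n",
	       cdata->cntlid, cdata->mdts, cdata->kas);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Built against the spdk/nvme.h headers and run with the same hugepage setup the EAL lines report, a sketch like this returns only after the controller reaches the ready state traced in the records below.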
00:21:59.558 [2024-05-15 00:08:28.797606] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:59.558 [2024-05-15 00:08:28.797622] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:21:59.558 [2024-05-15 00:08:28.797632] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797641] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797649] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797656] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797664] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797672] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797680] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797688] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797695] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797703] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x188600 00:21:59.558 [2024-05-15 00:08:28.797711] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797718] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797731] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797739] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797747] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797755] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797763] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797770] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797778] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797786] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797794] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797801] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797809] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 
00:08:28.797817] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797824] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797832] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797840] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797848] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797855] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797863] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797871] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.797878] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:21:59.559 [2024-05-15 00:08:28.797890] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:59.559 [2024-05-15 00:08:28.797896] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:21:59.559 [2024-05-15 00:08:28.801956] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.801977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x188600 00:21:59.559 [2024-05-15 00:08:28.809937] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.559 [2024-05-15 00:08:28.809953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:59.559 [2024-05-15 00:08:28.809963] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.809974] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:59.559 [2024-05-15 00:08:28.809984] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:59.559 [2024-05-15 00:08:28.809993] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:59.559 [2024-05-15 00:08:28.810010] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.559 [2024-05-15 00:08:28.810045] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.559 [2024-05-15 00:08:28.810056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:21:59.559 [2024-05-15 00:08:28.810064] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:59.559 [2024-05-15 00:08:28.810073] nvme_rdma.c:2436:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810082] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:59.559 [2024-05-15 00:08:28.810093] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.559 [2024-05-15 00:08:28.810125] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.559 [2024-05-15 00:08:28.810134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:21:59.559 [2024-05-15 00:08:28.810143] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:59.559 [2024-05-15 00:08:28.810151] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810161] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:59.559 [2024-05-15 00:08:28.810172] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.559 [2024-05-15 00:08:28.810203] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.559 [2024-05-15 00:08:28.810212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:59.559 [2024-05-15 00:08:28.810235] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:59.559 [2024-05-15 00:08:28.810243] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810255] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.559 [2024-05-15 00:08:28.810284] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.559 [2024-05-15 00:08:28.810293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:59.559 [2024-05-15 00:08:28.810301] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:59.559 [2024-05-15 00:08:28.810308] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:59.559 [2024-05-15 00:08:28.810316] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810325] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:21:59.559 [2024-05-15 00:08:28.810433] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:59.559 [2024-05-15 00:08:28.810440] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:59.559 [2024-05-15 00:08:28.810457] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.559 [2024-05-15 00:08:28.810489] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.559 [2024-05-15 00:08:28.810498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:59.559 [2024-05-15 00:08:28.810506] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:59.559 [2024-05-15 00:08:28.810514] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810526] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.559 [2024-05-15 00:08:28.810555] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.559 [2024-05-15 00:08:28.810564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:59.559 [2024-05-15 00:08:28.810571] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:59.559 [2024-05-15 00:08:28.810579] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:59.559 [2024-05-15 00:08:28.810586] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810596] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:59.559 [2024-05-15 00:08:28.810612] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:59.559 [2024-05-15 00:08:28.810626] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.559 [2024-05-15 00:08:28.810638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600 00:21:59.559 [2024-05-15 00:08:28.810685] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.559 [2024-05-15 00:08:28.810694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:59.559 [2024-05-15 00:08:28.810705] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:21:59.559 [2024-05-15 00:08:28.810713] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:21:59.559 [2024-05-15 00:08:28.810720] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:21:59.559 [2024-05-15 00:08:28.810726] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:21:59.559 [2024-05-15 00:08:28.810733] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:21:59.559 [2024-05-15 00:08:28.810741] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:21:59.559 [2024-05-15 00:08:28.810748] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x188600
00:21:59.559 [2024-05-15 00:08:28.810758] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:21:59.559 [2024-05-15 00:08:28.810776] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600
00:21:59.559 [2024-05-15 00:08:28.810788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:21:59.559 [2024-05-15 00:08:28.810811] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.560 [2024-05-15 00:08:28.810820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:21:59.560 [2024-05-15 00:08:28.810830] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.810840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:59.560 [2024-05-15 00:08:28.810850] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.810859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:59.560 [2024-05-15 00:08:28.810868] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.810877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:59.560 [2024-05-15 00:08:28.810887] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.810895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:21:59.560 [2024-05-15 00:08:28.810903] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:21:59.560 [2024-05-15 00:08:28.810925] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.810993] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:59.560 [2024-05-15 00:08:28.811008] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.811019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:21:59.560 [2024-05-15 00:08:28.811046] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.560 [2024-05-15 00:08:28.811055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:21:59.560 [2024-05-15 00:08:28.811064] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:21:59.560 [2024-05-15 00:08:28.811073] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:21:59.560 [2024-05-15 00:08:28.811081] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.811095] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:21:59.560 [2024-05-15 00:08:28.811107] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:21:59.560 [2024-05-15 00:08:28.811118] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.811129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:21:59.560 [2024-05-15 00:08:28.811155] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.560 [2024-05-15 00:08:28.811167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0
00:21:59.560 [2024-05-15 00:08:28.811240] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:21:59.560 [2024-05-15 00:08:28.811251] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.811264] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:21:59.560 [2024-05-15 00:08:28.811279] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600
00:21:59.560 [2024-05-15 00:08:28.811306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x188600
00:21:59.560 [2024-05-15 00:08:28.811339] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.560 [2024-05-15 00:08:28.811348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:21:59.560 [2024-05-15 00:08:28.811366] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:21:59.560
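The 'identify active ns' states and the 'Namespace 1 was added' record above correspond to the host fetching the active namespace list (IDENTIFY with CNS 02h, the cdw10:00000002 command in the trace) and then instantiating one namespace handle per active NSID. A hedged sketch of walking that list with the public API, reusing the includes and the connected ctrlr from the previous sketch (list_active_namespaces is an illustrative helper, not part of the test):

/* Assumes a connected ctrlr, e.g. from spdk_nvme_connect() as sketched earlier. */
static void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	/* Walk the active namespace list the controller reported via CNS 02h. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
			continue; /* defensive: skip anything not currently active */
		}
		/* Size and format come from the per-namespace IDENTIFY data. */
		printf("Namespace %u: %ju sectors of %u bytes\n", nsid,
		       (uintmax_t)spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}

The per-namespace IDENTIFY (cdw10:00000000, nsid:1) and the namespace ID descriptor fetch (cdw10:00000003) that follow in the log are what populate the data these getters return.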
[2024-05-15 00:08:28.811383] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811392] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811404] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811417] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600 00:21:59.560 [2024-05-15 00:08:28.811465] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.560 [2024-05-15 00:08:28.811473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:59.560 [2024-05-15 00:08:28.811493] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811503] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811514] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811527] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600 00:21:59.560 [2024-05-15 00:08:28.811565] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.560 [2024-05-15 00:08:28.811574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:59.560 [2024-05-15 00:08:28.811586] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811595] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811604] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811618] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811631] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811640] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811649] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:21:59.560 [2024-05-15 00:08:28.811656] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:59.560 [2024-05-15 00:08:28.811664] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:59.560 [2024-05-15 00:08:28.811685] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811698] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.560 [2024-05-15 00:08:28.811708] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.560 [2024-05-15 00:08:28.811736] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.560 [2024-05-15 00:08:28.811746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:59.560 [2024-05-15 00:08:28.811755] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811763] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.560 [2024-05-15 00:08:28.811771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:59.560 [2024-05-15 00:08:28.811779] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811791] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.560 [2024-05-15 00:08:28.811824] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.560 [2024-05-15 00:08:28.811833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:59.560 [2024-05-15 00:08:28.811841] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811853] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.560 [2024-05-15 00:08:28.811889] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.560 [2024-05-15 00:08:28.811897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:59.560 [2024-05-15 00:08:28.811906] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811944] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 
lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.811957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.560 [2024-05-15 00:08:28.811978] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.560 [2024-05-15 00:08:28.811991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:21:59.560 [2024-05-15 00:08:28.812001] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.812018] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.812030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x188600 00:21:59.560 [2024-05-15 00:08:28.812043] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:21:59.560 [2024-05-15 00:08:28.812054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x188600 00:21:59.560 [2024-05-15 00:08:28.812066] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x188600 00:21:59.561 [2024-05-15 00:08:28.812076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x188600 00:21:59.561 [2024-05-15 00:08:28.812089] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x188600 00:21:59.561 [2024-05-15 00:08:28.812099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x188600 00:21:59.561 [2024-05-15 00:08:28.812112] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.561 [2024-05-15 00:08:28.812121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:59.561 [2024-05-15 00:08:28.812139] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x188600 00:21:59.561 [2024-05-15 00:08:28.812149] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.561 [2024-05-15 00:08:28.812157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:59.561 [2024-05-15 00:08:28.812169] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x188600 00:21:59.561 [2024-05-15 00:08:28.812179] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.561 [2024-05-15 00:08:28.812187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:59.561 [2024-05-15 00:08:28.812199] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x188600 00:21:59.561 [2024-05-15 00:08:28.812208] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.561 [2024-05-15 00:08:28.812231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:21:59.561 [2024-05-15 00:08:28.812245] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x188600
00:21:59.561 =====================================================
00:21:59.561 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:59.561 =====================================================
00:21:59.561 Controller Capabilities/Features
00:21:59.561 ================================
00:21:59.561 Vendor ID: 8086
00:21:59.561 Subsystem Vendor ID: 8086
00:21:59.561 Serial Number: SPDK00000000000001
00:21:59.561 Model Number: SPDK bdev Controller
00:21:59.561 Firmware Version: 24.05
00:21:59.561 Recommended Arb Burst: 6
00:21:59.561 IEEE OUI Identifier: e4 d2 5c
00:21:59.561 Multi-path I/O
00:21:59.561 May have multiple subsystem ports: Yes
00:21:59.561 May have multiple controllers: Yes
00:21:59.561 Associated with SR-IOV VF: No
00:21:59.561 Max Data Transfer Size: 131072
00:21:59.561 Max Number of Namespaces: 32
00:21:59.561 Max Number of I/O Queues: 127
00:21:59.561 NVMe Specification Version (VS): 1.3
00:21:59.561 NVMe Specification Version (Identify): 1.3
00:21:59.561 Maximum Queue Entries: 128
00:21:59.561 Contiguous Queues Required: Yes
00:21:59.561 Arbitration Mechanisms Supported
00:21:59.561 Weighted Round Robin: Not Supported
00:21:59.561 Vendor Specific: Not Supported
00:21:59.561 Reset Timeout: 15000 ms
00:21:59.561 Doorbell Stride: 4 bytes
00:21:59.561 NVM Subsystem Reset: Not Supported
00:21:59.561 Command Sets Supported
00:21:59.561 NVM Command Set: Supported
00:21:59.561 Boot Partition: Not Supported
00:21:59.561 Memory Page Size Minimum: 4096 bytes
00:21:59.561 Memory Page Size Maximum: 4096 bytes
00:21:59.561 Persistent Memory Region: Not Supported
00:21:59.561 Optional Asynchronous Events Supported
00:21:59.561 Namespace Attribute Notices: Supported
00:21:59.561 Firmware Activation Notices: Not Supported
00:21:59.561 ANA Change Notices: Not Supported
00:21:59.561 PLE Aggregate Log Change Notices: Not Supported
00:21:59.561 LBA Status Info Alert Notices: Not Supported
00:21:59.561 EGE Aggregate Log Change Notices: Not Supported
00:21:59.561 Normal NVM Subsystem Shutdown event: Not Supported
00:21:59.561 Zone Descriptor Change Notices: Not Supported
00:21:59.561 Discovery Log Change Notices: Not Supported
00:21:59.561 Controller Attributes
00:21:59.561 128-bit Host Identifier: Supported
00:21:59.561 Non-Operational Permissive Mode: Not Supported
00:21:59.561 NVM Sets: Not Supported
00:21:59.561 Read Recovery Levels: Not Supported
00:21:59.561 Endurance Groups: Not Supported
00:21:59.561 Predictable Latency Mode: Not Supported
00:21:59.561 Traffic Based Keep Alive: Not Supported
00:21:59.561 Namespace Granularity: Not Supported
00:21:59.561 SQ Associations: Not Supported
00:21:59.561 UUID List: Not Supported
00:21:59.561 Multi-Domain Subsystem: Not Supported
00:21:59.561 Fixed Capacity Management: Not Supported
00:21:59.561 Variable Capacity Management: Not Supported
00:21:59.561 Delete Endurance Group: Not Supported
00:21:59.561 Delete NVM Set: Not Supported
00:21:59.561 Extended LBA Formats Supported: Not Supported
00:21:59.561 Flexible Data Placement Supported: Not Supported
00:21:59.561
00:21:59.561 Controller Memory Buffer Support
00:21:59.561 ================================
00:21:59.561 Supported: No
00:21:59.561
00:21:59.561 Persistent Memory Region Support
00:21:59.561 ================================
00:21:59.561 Supported: No
00:21:59.561
00:21:59.561 Admin Command Set Attributes
00:21:59.561 ============================
00:21:59.561 Security Send/Receive: Not Supported
00:21:59.561 Format NVM: Not Supported
00:21:59.561 Firmware Activate/Download: Not Supported
00:21:59.561 Namespace Management: Not Supported
00:21:59.561 Device Self-Test: Not Supported
00:21:59.561 Directives: Not Supported
00:21:59.561 NVMe-MI: Not Supported
00:21:59.561 Virtualization Management: Not Supported
00:21:59.561 Doorbell Buffer Config: Not Supported
00:21:59.561 Get LBA Status Capability: Not Supported
00:21:59.561 Command & Feature Lockdown Capability: Not Supported
00:21:59.561 Abort Command Limit: 4
00:21:59.561 Async Event Request Limit: 4
00:21:59.561 Number of Firmware Slots: N/A
00:21:59.561 Firmware Slot 1 Read-Only: N/A
00:21:59.561 Firmware Activation Without Reset: N/A
00:21:59.561 Multiple Update Detection Support: N/A
00:21:59.561 Firmware Update Granularity: No Information Provided
00:21:59.561 Per-Namespace SMART Log: No
00:21:59.561 Asymmetric Namespace Access Log Page: Not Supported
00:21:59.561 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:21:59.561 Command Effects Log Page: Supported
00:21:59.561 Get Log Page Extended Data: Supported
00:21:59.561 Telemetry Log Pages: Not Supported
00:21:59.561 Persistent Event Log Pages: Not Supported
00:21:59.561 Supported Log Pages Log Page: May Support
00:21:59.561 Commands Supported & Effects Log Page: Not Supported
00:21:59.561 Feature Identifiers & Effects Log Page: May Support
00:21:59.561 NVMe-MI Commands & Effects Log Page: May Support
00:21:59.561 Data Area 4 for Telemetry Log: Not Supported
00:21:59.561 Error Log Page Entries Supported: 128
00:21:59.561 Keep Alive: Supported
00:21:59.561 Keep Alive Granularity: 10000 ms
00:21:59.561
00:21:59.561 NVM Command Set Attributes
00:21:59.561 ==========================
00:21:59.561 Submission Queue Entry Size
00:21:59.561 Max: 64
00:21:59.561 Min: 64
00:21:59.561 Completion Queue Entry Size
00:21:59.561 Max: 16
00:21:59.561 Min: 16
00:21:59.561 Number of Namespaces: 32
00:21:59.561 Compare Command: Supported
00:21:59.561 Write Uncorrectable Command: Not Supported
00:21:59.561 Dataset Management Command: Supported
00:21:59.561 Write Zeroes Command: Supported
00:21:59.561 Set Features Save Field: Not Supported
00:21:59.561 Reservations: Supported
00:21:59.561 Timestamp: Not Supported
00:21:59.561 Copy: Supported
00:21:59.561 Volatile Write Cache: Present
00:21:59.561 Atomic Write Unit (Normal): 1
00:21:59.561 Atomic Write Unit (PFail): 1
00:21:59.561 Atomic Compare & Write Unit: 1
00:21:59.561 Fused Compare & Write: Supported
00:21:59.561 Scatter-Gather List
00:21:59.561 SGL Command Set: Supported
00:21:59.561 SGL Keyed: Supported
00:21:59.561 SGL Bit Bucket Descriptor: Not Supported
00:21:59.561 SGL Metadata Pointer: Not Supported
00:21:59.561 Oversized SGL: Not Supported
00:21:59.561 SGL Metadata Address: Not Supported
00:21:59.561 SGL Offset: Supported
00:21:59.561 Transport SGL Data Block: Not Supported
00:21:59.561 Replay Protected Memory Block: Not Supported
00:21:59.561
00:21:59.561 Firmware Slot Information
00:21:59.561 =========================
00:21:59.561 Active slot: 1
00:21:59.561 Slot 1 Firmware Revision: 24.05
00:21:59.561
00:21:59.561
00:21:59.561 Commands Supported and Effects
00:21:59.561 ==============================
00:21:59.561 Admin Commands
00:21:59.561 --------------
00:21:59.561 Get Log Page (02h): Supported
00:21:59.561 Identify (06h): Supported
00:21:59.561 Abort (08h): Supported
00:21:59.561 Set Features (09h): Supported
00:21:59.561 Get Features (0Ah): Supported
00:21:59.561 Asynchronous Event Request (0Ch): Supported
00:21:59.561 Keep Alive (18h): Supported
00:21:59.561 I/O Commands
00:21:59.561 ------------
00:21:59.561 Flush (00h): Supported LBA-Change
00:21:59.561 Write (01h): Supported LBA-Change
00:21:59.561 Read (02h): Supported
00:21:59.561 Compare (05h): Supported
00:21:59.561 Write Zeroes (08h): Supported LBA-Change
00:21:59.561 Dataset Management (09h): Supported LBA-Change
00:21:59.561 Copy (19h): Supported LBA-Change
00:21:59.561 Unknown (79h): Supported LBA-Change
00:21:59.561 Unknown (7Ah): Supported
00:21:59.561
00:21:59.561 Error Log
00:21:59.562 =========
00:21:59.562
00:21:59.562 Arbitration
00:21:59.562 ===========
00:21:59.562 Arbitration Burst: 1
00:21:59.562
00:21:59.562 Power Management
00:21:59.562 ================
00:21:59.562 Number of Power States: 1
00:21:59.562 Current Power State: Power State #0
00:21:59.562 Power State #0:
00:21:59.562 Max Power: 0.00 W
00:21:59.562 Non-Operational State: Operational
00:21:59.562 Entry Latency: Not Reported
00:21:59.562 Exit Latency: Not Reported
00:21:59.562 Relative Read Throughput: 0
00:21:59.562 Relative Read Latency: 0
00:21:59.562 Relative Write Throughput: 0
00:21:59.562 Relative Write Latency: 0
00:21:59.562 Idle Power: Not Reported
00:21:59.562 Active Power: Not Reported
00:21:59.562 Non-Operational Permissive Mode: Not Supported
00:21:59.562
00:21:59.562 Health Information
00:21:59.562 ==================
00:21:59.562 Critical Warnings:
00:21:59.562 Available Spare Space: OK
00:21:59.562 Temperature: OK
00:21:59.562 Device Reliability: OK
00:21:59.562 Read Only: No
00:21:59.562 Volatile Memory Backup: OK
00:21:59.562 Current Temperature: 0 Kelvin (-273 Celsius)
00:21:59.562 Temperature Threshold: [2024-05-15 00:08:28.812347] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x188600
00:21:59.562 [2024-05-15 00:08:28.812362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:21:59.562 [2024-05-15 00:08:28.812382] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.562 [2024-05-15 00:08:28.812391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:21:59.562 [2024-05-15 00:08:28.812399] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x188600
00:21:59.562 [2024-05-15 00:08:28.812435] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:21:59.562 [2024-05-15 00:08:28.812456] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21008 doesn't match qid
00:21:59.562 [2024-05-15 00:08:28.812475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:5 sqhd:8ef0 p:0 m:0 dnr:0
00:21:59.562 [2024-05-15 00:08:28.812484] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21008 doesn't match qid
00:21:59.562 [2024-05-15 00:08:28.812495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:5 sqhd:8ef0 p:0 m:0 dnr:0
00:21:59.562 [2024-05-15 00:08:28.812504] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21008 doesn't match qid
00:21:59.562 [2024-05-15 00:08:28.812514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:5 sqhd:8ef0 p:0 m:0 dnr:0
00:21:59.562 [2024-05-15 00:08:28.812523] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21008 doesn't match qid
00:21:59.562 [2024-05-15 00:08:28.812533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:5 sqhd:8ef0 p:0 m:0 dnr:0
00:21:59.562 [2024-05-15 00:08:28.812546] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x188600
00:21:59.562 [2024-05-15 00:08:28.812558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:21:59.562 [2024-05-15 00:08:28.812575] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.562 [2024-05-15 00:08:28.812584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0
00:21:59.562 [2024-05-15 00:08:28.812595] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600
00:21:59.562 [2024-05-15 00:08:28.812606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:21:59.562 [2024-05-15 00:08:28.812615] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x188600
00:21:59.562 [2024-05-15 00:08:28.812635] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.562 [2024-05-15 00:08:28.812643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:21:59.562 [2024-05-15 00:08:28.812651] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:21:59.562 [2024-05-15 00:08:28.812659] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:21:59.562 [2024-05-15 00:08:28.812666] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x188600
00:21:59.562 [2024-05-15 00:08:28.812678] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600
00:21:59.562 [2024-05-15 00:08:28.812689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:21:59.562 [2024-05-15 00:08:28.812706] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.562 [2024-05-15 00:08:28.812715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:21:59.562 [2024-05-15 00:08:28.812723] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x188600
00:21:59.562 [2024-05-15 00:08:28.812735] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600
00:21:59.562 [2024-05-15 00:08:28.812746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:21:59.562 [2024-05-15 00:08:28.812768]
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.812776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 00:08:28.812788] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.812801] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.812812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.562 [2024-05-15 00:08:28.812830] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.812838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 00:08:28.812846] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.812858] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.812869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.562 [2024-05-15 00:08:28.812884] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.812892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 00:08:28.812901] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.812936] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.812949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.562 [2024-05-15 00:08:28.812971] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.812995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 00:08:28.813004] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813017] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.562 [2024-05-15 00:08:28.813048] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.813057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 00:08:28.813066] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813078] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.562 [2024-05-15 00:08:28.813110] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.813119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 00:08:28.813128] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813141] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.562 [2024-05-15 00:08:28.813177] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.813186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 00:08:28.813198] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813227] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.562 [2024-05-15 00:08:28.813256] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.813264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 00:08:28.813273] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813301] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.562 [2024-05-15 00:08:28.813329] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.813338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 00:08:28.813346] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813358] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.562 [2024-05-15 00:08:28.813369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.562 [2024-05-15 00:08:28.813390] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.562 [2024-05-15 00:08:28.813399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:21:59.562 [2024-05-15 
00:08:28.813407] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813419] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.563 [2024-05-15 00:08:28.813447] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.563 [2024-05-15 00:08:28.813456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:21:59.563 [2024-05-15 00:08:28.813464] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813476] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.563 [2024-05-15 00:08:28.813508] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.563 [2024-05-15 00:08:28.813517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:59.563 [2024-05-15 00:08:28.813525] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813537] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.563 [2024-05-15 00:08:28.813567] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.563 [2024-05-15 00:08:28.813579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:21:59.563 [2024-05-15 00:08:28.813588] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813600] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.563 [2024-05-15 00:08:28.813637] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.563 [2024-05-15 00:08:28.813645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:21:59.563 [2024-05-15 00:08:28.813654] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813666] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.563 [2024-05-15 00:08:28.813694] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.563 [2024-05-15 00:08:28.813702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:21:59.563 [2024-05-15 00:08:28.813710] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813722] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.563 [2024-05-15 00:08:28.813752] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.563 [2024-05-15 00:08:28.813761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:21:59.563 [2024-05-15 00:08:28.813769] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813781] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.563 [2024-05-15 00:08:28.813811] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.563 [2024-05-15 00:08:28.813819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:21:59.563 [2024-05-15 00:08:28.813828] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813839] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.563 [2024-05-15 00:08:28.813867] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.563 [2024-05-15 00:08:28.813876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:21:59.563 [2024-05-15 00:08:28.813884] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813896] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.813907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:59.563 [2024-05-15 00:08:28.817948] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:59.563 [2024-05-15 00:08:28.817963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:21:59.563 [2024-05-15 00:08:28.817972] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x188600 00:21:59.563 [2024-05-15 00:08:28.817987] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x188600
00:21:59.563 [2024-05-15 00:08:28.817999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:21:59.563 [2024-05-15 00:08:28.818019] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:21:59.563 [2024-05-15 00:08:28.818028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000f p:0 m:0 dnr:0
00:21:59.563 [2024-05-15 00:08:28.818037] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x188600
00:21:59.563 [2024-05-15 00:08:28.818046] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds
00:21:59.563 0 Kelvin (-273 Celsius)
00:21:59.563 Available Spare: 0%
00:21:59.563 Available Spare Threshold: 0%
00:21:59.563 Life Percentage Used: 0%
00:21:59.563 Data Units Read: 0
00:21:59.563 Data Units Written: 0
00:21:59.563 Host Read Commands: 0
00:21:59.563 Host Write Commands: 0
00:21:59.563 Controller Busy Time: 0 minutes
00:21:59.563 Power Cycles: 0
00:21:59.563 Power On Hours: 0 hours
00:21:59.563 Unsafe Shutdowns: 0
00:21:59.563 Unrecoverable Media Errors: 0
00:21:59.563 Lifetime Error Log Entries: 0
00:21:59.563 Warning Temperature Time: 0 minutes
00:21:59.563 Critical Temperature Time: 0 minutes
00:21:59.563
00:21:59.563 Number of Queues
00:21:59.563 ================
00:21:59.563 Number of I/O Submission Queues: 127
00:21:59.563 Number of I/O Completion Queues: 127
00:21:59.563
00:21:59.563 Active Namespaces
00:21:59.563 =================
00:21:59.563 Namespace ID:1
00:21:59.563 Error Recovery Timeout: Unlimited
00:21:59.563 Command Set Identifier: NVM (00h)
00:21:59.563 Deallocate: Supported
00:21:59.563 Deallocated/Unwritten Error: Not Supported
00:21:59.563 Deallocated Read Value: Unknown
00:21:59.563 Deallocate in Write Zeroes: Not Supported
00:21:59.563 Deallocated Guard Field: 0xFFFF
00:21:59.563 Flush: Supported
00:21:59.563 Reservation: Supported
00:21:59.563 Namespace Sharing Capabilities: Multiple Controllers
00:21:59.563 Size (in LBAs): 131072 (0GiB)
00:21:59.563 Capacity (in LBAs): 131072 (0GiB)
00:21:59.563 Utilization (in LBAs): 131072 (0GiB)
00:21:59.563 NGUID: ABCDEF0123456789ABCDEF0123456789
00:21:59.563 EUI64: ABCDEF0123456789
00:21:59.563 UUID: 7acefd28-87f0-4279-8a43-20c5e3d6c972
00:21:59.563 Thin Provisioning: Not Supported
00:21:59.563 Per-NS Atomic Units: Yes
00:21:59.563 Atomic Boundary Size (Normal): 0
00:21:59.563 Atomic Boundary Size (PFail): 0
00:21:59.563 Atomic Boundary Offset: 0
00:21:59.563 Maximum Single Source Range Length: 65535
00:21:59.563 Maximum Copy Length: 65535
00:21:59.563 Maximum Source Range Count: 1
00:21:59.563 NGUID/EUI64 Never Reused: No
00:21:59.563 Namespace Write Protected: No
00:21:59.563 Number of LBA Formats: 1
00:21:59.563 Current LBA Format: LBA Format #00
00:21:59.563 LBA Format #00: Data Size: 512 Metadata Size: 0
00:21:59.563
00:21:59.563 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync
00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:59.563 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:59.564 00:08:28
nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:59.564 00:08:28 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:59.564 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.564 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:59.564 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:59.564 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:59.564 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:59.564 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.564 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:59.564 rmmod nvme_rdma 00:21:59.564 rmmod nvme_fabrics 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 598247 ']' 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 598247 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 598247 ']' 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 598247 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 598247 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 598247' 00:21:59.821 killing process with pid 598247 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@965 -- # kill 598247 00:21:59.821 [2024-05-15 00:08:28.931670] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:59.821 00:08:28 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@970 -- # wait 598247 00:21:59.821 [2024-05-15 00:08:29.016148] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:00.079 00:08:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:00.079 00:08:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:00.079 00:22:00.079 real 0m4.045s 00:22:00.079 user 0m5.288s 00:22:00.079 sys 0m2.204s 00:22:00.079 00:08:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:00.079 00:08:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:00.079 ************************************ 00:22:00.079 END TEST nvmf_identify 00:22:00.079 ************************************ 00:22:00.079 00:08:29 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:00.079 00:08:29 nvmf_rdma -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:00.079 00:08:29 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:00.079 00:08:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:00.079 ************************************ 00:22:00.079 START TEST nvmf_perf 00:22:00.079 ************************************ 00:22:00.079 00:08:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:00.079 * Looking for test storage... 00:22:00.336 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.336 00:08:29 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.337 00:08:29 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.863 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- 
nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:22:02.864 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:22:02.864 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:22:02.864 Found net devices under 0000:09:00.0: mlx_0_0 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:22:02.864 Found net devices under 0000:09:00.1: mlx_0_1 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 
-- # [[ rdma == tcp ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.864 00:08:31 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:02.864 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:02.864 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:22:02.864 altname enp9s0f0np0 00:22:02.864 inet 192.168.100.8/24 scope global mlx_0_0 00:22:02.864 valid_lft forever preferred_lft forever 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:02.864 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:02.864 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:22:02.864 altname enp9s0f1np1 00:22:02.864 inet 192.168.100.9/24 scope global mlx_0_1 00:22:02.864 valid_lft forever preferred_lft forever 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.864 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:02.865 192.168.100.9' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:02.865 192.168.100.9' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:02.865 192.168.100.9' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=600385 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 600385 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 600385 ']' 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:02.865 00:08:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:02.865 [2024-05-15 00:08:31.974641] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:22:02.865 [2024-05-15 00:08:31.974723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.865 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.865 [2024-05-15 00:08:32.056099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.865 [2024-05-15 00:08:32.174716] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.865 [2024-05-15 00:08:32.174771] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.865 [2024-05-15 00:08:32.174788] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.865 [2024-05-15 00:08:32.174802] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.865 [2024-05-15 00:08:32.174814] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
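At this point nvmfappstart has launched nvmf_tgt (core mask 0xF, all tracepoints enabled) and waitforlisten is polling the RPC socket until the app answers, rather than sleeping a fixed time. A rough equivalent of that start-and-wait step, as a sketch rather than the harness's exact implementation (the binary and socket paths are the defaults this log uses):

  # Start the target, then poll its RPC socket; give up early if the
  # process dies during startup instead of waiting out the full loop.
  sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.1
  done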
00:22:02.865 [2024-05-15 00:08:32.174893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.865 [2024-05-15 00:08:32.174967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.865 [2024-05-15 00:08:32.174990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.865 [2024-05-15 00:08:32.174993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.123 00:08:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:03.123 00:08:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:22:03.123 00:08:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.123 00:08:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.123 00:08:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:03.123 00:08:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.123 00:08:32 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:03.123 00:08:32 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:06.400 00:08:35 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:06.400 00:08:35 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:06.400 00:08:35 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:06.400 00:08:35 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:06.658 00:08:35 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:06.658 00:08:35 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:06.658 00:08:35 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:06.658 00:08:35 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:22:06.658 00:08:35 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:22:06.916 [2024-05-15 00:08:36.190723] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:22:06.916 [2024-05-15 00:08:36.214103] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x680eb0/0x68ebc0) succeed. 00:22:06.916 [2024-05-15 00:08:36.226203] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6824f0/0x72ecc0) succeed. 
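Both mlx5 ports are now registered with the RDMA transport (the two create_ib_device notices above), and the traced rpc.py calls that follow attach a namespace-backed subsystem to it. Condensed into one place, the target-side bring-up amounts to the sequence below; a sketch assembled from the traced commands, where the explicit -b Malloc0 bdev name is an assumption (the test script simply captures whatever name the RPC returns):

  rpc=./scripts/rpc.py
  # RDMA transport; -c 0 is clamped up to the 256-byte in-capsule minimum,
  # as the nvmf_rdma_create warning above notes.
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512-byte blocks (name assumed)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420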
00:22:07.173 00:08:36 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.434 00:08:36 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:07.434 00:08:36 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.692 00:08:36 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:07.692 00:08:36 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:07.949 00:08:37 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:08.206 [2024-05-15 00:08:37.327568] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:08.206 [2024-05-15 00:08:37.327895] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:08.206 00:08:37 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:08.466 00:08:37 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:08.466 00:08:37 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:08.466 00:08:37 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:08.466 00:08:37 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:09.850 Initializing NVMe Controllers 00:22:09.850 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:22:09.850 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:22:09.850 Initialization complete. Launching workers. 00:22:09.850 ======================================================== 00:22:09.850 Latency(us) 00:22:09.850 Device Information : IOPS MiB/s Average min max 00:22:09.850 PCIE (0000:88:00.0) NSID 1 from core 0: 83938.32 327.88 380.57 32.49 5277.16 00:22:09.850 ======================================================== 00:22:09.850 Total : 83938.32 327.88 380.57 32.49 5277.16 00:22:09.850 00:22:09.850 00:08:38 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:09.850 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.126 Initializing NVMe Controllers 00:22:13.126 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.126 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:13.126 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:13.126 Initialization complete. Launching workers. 
00:22:13.126 ======================================================== 00:22:13.126 Latency(us) 00:22:13.126 Device Information : IOPS MiB/s Average min max 00:22:13.126 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5567.95 21.75 178.61 62.46 5077.94 00:22:13.126 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4469.56 17.46 222.60 87.35 5053.42 00:22:13.126 ======================================================== 00:22:13.126 Total : 10037.51 39.21 198.19 62.46 5077.94 00:22:13.126 00:22:13.127 00:08:42 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:13.127 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.407 Initializing NVMe Controllers 00:22:16.407 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.407 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.407 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:16.407 Initialization complete. Launching workers. 00:22:16.407 ======================================================== 00:22:16.407 Latency(us) 00:22:16.407 Device Information : IOPS MiB/s Average min max 00:22:16.407 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14332.97 55.99 2233.30 636.95 9397.41 00:22:16.407 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3992.47 15.60 8074.23 7638.02 12254.63 00:22:16.407 ======================================================== 00:22:16.407 Total : 18325.44 71.58 3505.83 636.95 12254.63 00:22:16.407 00:22:16.407 00:08:45 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:22:16.407 00:08:45 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:16.407 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.588 Initializing NVMe Controllers 00:22:20.588 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:20.588 Controller IO queue size 128, less than required. 00:22:20.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:20.588 Controller IO queue size 128, less than required. 00:22:20.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:20.588 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:20.588 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:20.588 Initialization complete. Launching workers. 
00:22:20.588 ======================================================== 00:22:20.588 Latency(us) 00:22:20.588 Device Information : IOPS MiB/s Average min max 00:22:20.588 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2957.00 739.25 43560.55 19617.10 100230.69 00:22:20.588 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3131.00 782.75 40351.03 18631.88 67477.76 00:22:20.588 ======================================================== 00:22:20.588 Total : 6088.00 1522.00 41909.92 18631.88 100230.69 00:22:20.588 00:22:20.845 00:08:49 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:22:20.845 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.103 No valid NVMe controllers or AIO or URING devices found 00:22:21.103 Initializing NVMe Controllers 00:22:21.103 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.103 Controller IO queue size 128, less than required. 00:22:21.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.103 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:21.103 Controller IO queue size 128, less than required. 00:22:21.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.103 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:21.103 WARNING: Some requested NVMe devices were skipped 00:22:21.103 00:08:50 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:22:21.103 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.388 Initializing NVMe Controllers 00:22:26.388 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.388 Controller IO queue size 128, less than required. 00:22:26.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:26.388 Controller IO queue size 128, less than required. 00:22:26.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:26.388 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:26.388 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:26.388 Initialization complete. Launching workers. 
00:22:26.388 00:22:26.388 ==================== 00:22:26.388 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:26.388 RDMA transport: 00:22:26.388 dev name: mlx5_0 00:22:26.388 polls: 324114 00:22:26.388 idle_polls: 321763 00:22:26.388 completions: 34394 00:22:26.388 queued_requests: 1 00:22:26.388 total_send_wrs: 17197 00:22:26.388 send_doorbell_updates: 2118 00:22:26.388 total_recv_wrs: 17324 00:22:26.388 recv_doorbell_updates: 2121 00:22:26.388 --------------------------------- 00:22:26.388 00:22:26.388 ==================== 00:22:26.388 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:26.388 RDMA transport: 00:22:26.388 dev name: mlx5_0 00:22:26.388 polls: 326395 00:22:26.388 idle_polls: 326128 00:22:26.388 completions: 17246 00:22:26.388 queued_requests: 1 00:22:26.388 total_send_wrs: 8623 00:22:26.388 send_doorbell_updates: 250 00:22:26.388 total_recv_wrs: 8750 00:22:26.388 recv_doorbell_updates: 251 00:22:26.388 --------------------------------- 00:22:26.388 ======================================================== 00:22:26.388 Latency(us) 00:22:26.388 Device Information : IOPS MiB/s Average min max 00:22:26.388 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4299.00 1074.75 29863.84 14416.43 71370.24 00:22:26.388 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2155.50 538.87 59215.66 32165.64 88416.73 00:22:26.388 ======================================================== 00:22:26.388 Total : 6454.49 1613.62 39665.97 14416.43 88416.73 00:22:26.388 00:22:26.388 00:08:54 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:26.388 00:08:54 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:26.388 rmmod nvme_rdma 00:22:26.388 rmmod nvme_fabrics 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 600385 ']' 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 600385 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 600385 ']' 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 600385 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 600385 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 600385' 00:22:26.388 killing process with pid 600385 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@965 -- # kill 600385 00:22:26.388 [2024-05-15 00:08:55.127160] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:26.388 00:08:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@970 -- # wait 600385 00:22:26.388 [2024-05-15 00:08:55.183478] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:27.760 00:08:56 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.760 00:08:56 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:27.760 00:22:27.760 real 0m27.503s 00:22:27.760 user 1m40.306s 00:22:27.760 sys 0m2.879s 00:22:27.760 00:08:56 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:27.760 00:08:56 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:27.760 ************************************ 00:22:27.760 END TEST nvmf_perf 00:22:27.760 ************************************ 00:22:27.760 00:08:56 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:22:27.760 00:08:56 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:27.760 00:08:56 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:27.760 00:08:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:27.760 ************************************ 00:22:27.760 START TEST nvmf_fio_host 00:22:27.760 ************************************ 00:22:27.760 00:08:56 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:22:27.760 * Looking for test storage... 
00:22:27.760 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:27.760 00:08:56 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:27.760 00:08:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.760 00:08:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.760 00:08:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.760 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.760 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.761 00:08:56 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.290 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:22:30.291 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == 
unknown ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:22:30.291 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:22:30.291 Found net devices under 0000:09:00.0: mlx_0_0 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:22:30.291 Found net devices under 0000:09:00.1: mlx_0_1 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 
-- # '[' Linux '!=' Linux ']' 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:30.291 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:30.291 250: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:22:30.291 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:22:30.291 altname enp9s0f0np0 00:22:30.292 inet 192.168.100.8/24 scope global mlx_0_0 00:22:30.292 valid_lft forever preferred_lft forever 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:30.292 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:30.292 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:22:30.292 altname enp9s0f1np1 00:22:30.292 inet 192.168.100.9/24 scope global mlx_0_1 00:22:30.292 valid_lft forever preferred_lft forever 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:30.292 192.168.100.9' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:30.292 192.168.100.9' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:30.292 192.168.100.9' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=605289 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 605289 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 605289 ']' 
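[editor's annotation, not captured output] The address harvest traced above reduces to one iproute2 idiom per RDMA-capable netdev; a minimal equivalent using the interface names from this run:

    # First IPv4 address of each mlx netdev, as assigned to
    # NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP above.
    for dev in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
    done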
00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.292 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.292 [2024-05-15 00:08:59.423045] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:22:30.292 [2024-05-15 00:08:59.423124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.292 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.292 [2024-05-15 00:08:59.494897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.292 [2024-05-15 00:08:59.601280] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.292 [2024-05-15 00:08:59.601341] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.292 [2024-05-15 00:08:59.601369] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.292 [2024-05-15 00:08:59.601381] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.292 [2024-05-15 00:08:59.601391] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.292 [2024-05-15 00:08:59.601443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.292 [2024-05-15 00:08:59.601468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.292 [2024-05-15 00:08:59.601525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.292 [2024-05-15 00:08:59.601527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.550 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:30.551 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:22:30.551 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:30.551 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.551 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.551 [2024-05-15 00:08:59.762719] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1645a20/0x1649f10) succeed. 00:22:30.551 [2024-05-15 00:08:59.779348] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1647060/0x168b5a0) succeed. 
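[editor's annotation, not captured output] The fio_nvme helper invoked below reduces to preloading SPDK's fio external engine and addressing the exported subsystem through fio's filename syntax; the paths and address are the ones from this run, and the job file itself supplies ioengine=spdk (visible in the run banner that follows):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio \
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096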
00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.810 Malloc1 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.810 00:08:59 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.810 [2024-05-15 00:09:00.009611] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:30.810 [2024-05-15 00:09:00.009907] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.810 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:30.811 00:09:00 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:31.073 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:31.073 fio-3.35 00:22:31.073 Starting 1 thread 00:22:31.073 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.630 00:22:33.630 test: (groupid=0, jobs=1): err= 0: pid=605540: Wed May 15 00:09:02 2024 00:22:33.630 read: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(108MiB/2004msec) 00:22:33.630 slat (nsec): min=1795, max=34206, avg=2102.89, stdev=1091.75 00:22:33.630 clat (usec): min=1764, max=8241, avg=4595.52, stdev=156.83 00:22:33.630 lat (usec): min=1775, max=8243, avg=4597.63, stdev=156.78 00:22:33.630 clat percentiles (usec): 00:22:33.630 | 1.00th=[ 4146], 5.00th=[ 4490], 10.00th=[ 4490], 20.00th=[ 4555], 00:22:33.630 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4555], 60.00th=[ 4621], 00:22:33.630 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4686], 95.00th=[ 4752], 00:22:33.630 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5735], 99.95th=[ 7570], 
00:22:33.630 | 99.99th=[ 8225] 00:22:33.630 bw ( KiB/s): min=54152, max=55888, per=100.00%, avg=55382.00, stdev=823.04, samples=4 00:22:33.630 iops : min=13538, max=13972, avg=13845.50, stdev=205.76, samples=4 00:22:33.630 write: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(108MiB/2004msec); 0 zone resets 00:22:33.630 slat (nsec): min=1876, max=37061, avg=2351.14, stdev=1431.89 00:22:33.630 clat (usec): min=1779, max=8249, avg=4594.92, stdev=165.98 00:22:33.630 lat (usec): min=1785, max=8251, avg=4597.27, stdev=165.95 00:22:33.630 clat percentiles (usec): 00:22:33.630 | 1.00th=[ 4146], 5.00th=[ 4490], 10.00th=[ 4490], 20.00th=[ 4555], 00:22:33.630 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4555], 60.00th=[ 4621], 00:22:33.630 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4686], 95.00th=[ 4752], 00:22:33.630 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 6915], 99.95th=[ 7635], 00:22:33.630 | 99.99th=[ 8225] 00:22:33.630 bw ( KiB/s): min=54392, max=55856, per=99.93%, avg=55360.00, stdev=665.65, samples=4 00:22:33.630 iops : min=13598, max=13964, avg=13840.00, stdev=166.41, samples=4 00:22:33.630 lat (msec) : 2=0.02%, 4=0.24%, 10=99.74% 00:22:33.630 cpu : usr=99.30%, sys=0.10%, ctx=15, majf=0, minf=12 00:22:33.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:33.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:33.630 issued rwts: total=27744,27754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:33.630 00:22:33.630 Run status group 0 (all jobs): 00:22:33.630 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=108MiB (114MB), run=2004-2004msec 00:22:33.630 WRITE: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=108MiB (114MB), run=2004-2004msec 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 
00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:33.630 00:09:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:22:33.630 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:33.630 fio-3.35 00:22:33.630 Starting 1 thread 00:22:33.630 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.159 00:22:36.159 test: (groupid=0, jobs=1): err= 0: pid=605955: Wed May 15 00:09:05 2024 00:22:36.159 read: IOPS=10.3k, BW=161MiB/s (169MB/s)(320MiB/1990msec) 00:22:36.159 slat (nsec): min=2857, max=32610, avg=3378.24, stdev=1322.42 00:22:36.159 clat (usec): min=369, max=11435, avg=2180.41, stdev=1246.68 00:22:36.159 lat (usec): min=372, max=11442, avg=2183.79, stdev=1247.02 00:22:36.159 clat percentiles (usec): 00:22:36.159 | 1.00th=[ 775], 5.00th=[ 1090], 10.00th=[ 1287], 20.00th=[ 1483], 00:22:36.159 | 30.00th=[ 1631], 40.00th=[ 1762], 50.00th=[ 1893], 60.00th=[ 2040], 00:22:36.159 | 70.00th=[ 2245], 80.00th=[ 2507], 90.00th=[ 3032], 95.00th=[ 4490], 00:22:36.159 | 99.00th=[ 8291], 99.50th=[ 9110], 99.90th=[11338], 99.95th=[11338], 00:22:36.159 | 99.99th=[11469] 00:22:36.159 bw ( KiB/s): min=76672, max=86688, per=49.47%, avg=81520.00, stdev=4175.11, samples=4 00:22:36.159 iops : min= 4792, max= 5418, avg=5095.00, stdev=260.94, samples=4 00:22:36.159 write: IOPS=5589, BW=87.3MiB/s (91.6MB/s)(166MiB/1902msec); 0 zone resets 00:22:36.159 slat (nsec): min=31105, max=77131, avg=34441.76, stdev=4587.77 00:22:36.159 clat (usec): min=6126, max=25429, avg=18321.05, stdev=2391.38 00:22:36.159 lat (usec): min=6159, max=25464, avg=18355.49, stdev=2391.47 00:22:36.159 clat percentiles (usec): 00:22:36.159 | 1.00th=[ 9503], 5.00th=[14877], 10.00th=[15664], 20.00th=[16581], 00:22:36.159 | 30.00th=[17171], 40.00th=[17957], 50.00th=[18482], 60.00th=[19006], 00:22:36.159 | 70.00th=[19530], 80.00th=[20317], 90.00th=[21365], 95.00th=[21890], 00:22:36.159 | 99.00th=[23200], 99.50th=[23725], 99.90th=[24773], 99.95th=[25035], 00:22:36.159 | 99.99th=[25297] 00:22:36.159 bw ( KiB/s): min=79552, max=87424, per=94.31%, avg=84352.00, stdev=3405.36, samples=4 00:22:36.159 iops : min= 4972, max= 5464, avg=5272.00, stdev=212.83, samples=4 00:22:36.159 lat (usec) : 500=0.03%, 750=0.53%, 1000=1.61% 
00:22:36.159 lat (msec) : 2=35.56%, 4=24.52%, 10=3.90%, 20=25.86%, 50=7.99% 00:22:36.159 cpu : usr=97.71%, sys=0.95%, ctx=142, majf=0, minf=16 00:22:36.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:36.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:36.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:36.159 issued rwts: total=20497,10632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:36.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:36.159 00:22:36.159 Run status group 0 (all jobs): 00:22:36.159 READ: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=320MiB (336MB), run=1990-1990msec 00:22:36.159 WRITE: bw=87.3MiB/s (91.6MB/s), 87.3MiB/s-87.3MiB/s (91.6MB/s-91.6MB/s), io=166MiB (174MB), run=1902-1902msec 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:36.159 rmmod nvme_rdma 00:22:36.159 rmmod nvme_fabrics 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 605289 ']' 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 605289 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 605289 ']' 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 605289 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 605289 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 605289' 00:22:36.159 killing process with pid 605289 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 605289 00:22:36.159 [2024-05-15 00:09:05.236812] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:36.159 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 605289 00:22:36.159 [2024-05-15 00:09:05.325375] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:36.417 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:36.417 00:09:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:36.417 00:22:36.417 real 0m8.695s 00:22:36.417 user 0m29.379s 00:22:36.417 sys 0m2.317s 00:22:36.417 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:36.417 00:09:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.417 ************************************ 00:22:36.417 END TEST nvmf_fio_host 00:22:36.417 ************************************ 00:22:36.417 00:09:05 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:22:36.417 00:09:05 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:36.417 00:09:05 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:36.417 00:09:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:36.417 ************************************ 00:22:36.417 START TEST nvmf_failover 00:22:36.417 ************************************ 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:22:36.417 * Looking for test storage... 
00:22:36.417 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.417 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.418 00:09:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.418 00:09:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.945 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.946 
00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:22:38.946 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:22:38.946 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:22:38.946 Found net devices under 0000:09:00.0: mlx_0_0 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:22:38.946 Found net devices 
under 0000:09:00.1: mlx_0_1 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:38.946 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@105 -- # continue 2 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:39.205 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:39.205 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:22:39.205 altname enp9s0f0np0 00:22:39.205 inet 192.168.100.8/24 scope global mlx_0_0 00:22:39.205 valid_lft forever preferred_lft forever 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:39.205 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:39.205 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:22:39.205 altname enp9s0f1np1 00:22:39.205 inet 192.168.100.9/24 scope global mlx_0_1 00:22:39.205 valid_lft forever preferred_lft forever 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:39.205 192.168.100.9' 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:39.205 192.168.100.9' 00:22:39.205 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:39.206 192.168.100.9' 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:39.206 00:09:08 
nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=608657 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 608657 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 608657 ']' 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:39.206 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.206 [2024-05-15 00:09:08.409986] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:22:39.206 [2024-05-15 00:09:08.410070] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.206 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.206 [2024-05-15 00:09:08.484434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:39.464 [2024-05-15 00:09:08.590373] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.464 [2024-05-15 00:09:08.590422] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.464 [2024-05-15 00:09:08.590452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.464 [2024-05-15 00:09:08.590464] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.464 [2024-05-15 00:09:08.590473] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
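Note on the nvmf_tgt invocation above: -m 0xE is the reactor core mask, and 0xE = 0b1110 = cores {1,2,3}, which matches the three "Reactor started on core N" lines that follow (core 0 is left free and is later used by the single-core bdevperf client). -e 0xFFFF enables every tracepoint group, which is what produces the "Tracepoint Group Mask 0xFFFF specified" notice, and -i 0 selects shared-memory instance 0, hence the "spdk_trace -s nvmf -i 0" hint.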
00:22:39.464 [2024-05-15 00:09:08.590567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.464 [2024-05-15 00:09:08.590628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.464 [2024-05-15 00:09:08.590631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.464 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:39.464 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:22:39.464 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.464 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.464 00:09:08 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.464 00:09:08 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.464 00:09:08 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:39.722 [2024-05-15 00:09:09.001845] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x93a160/0x93e650) succeed. 00:22:39.722 [2024-05-15 00:09:09.012311] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x93b700/0x97fce0) succeed. 00:22:39.980 00:09:09 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:40.238 Malloc0 00:22:40.238 00:09:09 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.495 00:09:09 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.754 00:09:09 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:41.012 [2024-05-15 00:09:10.151724] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:41.012 [2024-05-15 00:09:10.152091] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:41.012 00:09:10 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:41.269 [2024-05-15 00:09:10.388730] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:41.269 00:09:10 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:22:41.526 [2024-05-15 00:09:10.633551] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=609091 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 609091 /var/tmp/bdevperf.sock 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 609091 ']' 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:41.526 00:09:10 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:41.784 00:09:10 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:41.784 00:09:10 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:22:41.784 00:09:10 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.042 NVMe0n1 00:22:42.042 00:09:11 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.300 00:22:42.300 00:09:11 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=609224 00:22:42.300 00:09:11 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.300 00:09:11 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:43.672 00:09:12 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:43.672 00:09:12 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:46.949 00:09:15 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:46.949 00:22:46.949 00:09:16 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:47.207 00:09:16 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:50.483 00:09:19 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:50.483 
[2024-05-15 00:09:19.800308] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:50.483 00:09:19 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:51.854 00:09:20 nvmf_rdma.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:22:51.854 00:09:21 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 609224 00:22:58.440 0 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 609091 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 609091 ']' 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 609091 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 609091 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 609091' 00:22:58.440 killing process with pid 609091 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 609091 00:22:58.440 00:09:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 609091 00:22:58.440 00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:58.440 [2024-05-15 00:09:10.690502] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:22:58.440 [2024-05-15 00:09:10.690596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609091 ] 00:22:58.440 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.440 [2024-05-15 00:09:10.762457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.440 [2024-05-15 00:09:10.870662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.440 Running I/O for 15 seconds... 
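The failover exercise whose bdevperf output is replayed below (from try.txt) drives the listener churn traced above: bdevperf attaches NVMe0 to the same subsystem through ports 4420, 4421 and 4422, then listeners are removed and re-added while I/O runs so the bdev_nvme layer must fail over between paths. Condensed to the rpc.py calls visible in this log, with sleeps omitted (a sketch of the sequence, not the full failover.sh):

  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420   # I/O fails over 4420 -> 4421
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421   # fail over 4421 -> 4422
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422   # fail back to 4420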
00:22:58.440 [2024-05-15 00:09:13.870898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.440 [2024-05-15 00:09:13.870974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:58.440 [2024-05-15 00:09:13.870993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.440 [2024-05-15 00:09:13.871007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:58.440 [2024-05-15 00:09:13.871021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.440 [2024-05-15 00:09:13.871034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:58.440 [2024-05-15 00:09:13.871049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.440 [2024-05-15 00:09:13.871062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:58.440 [2024-05-15 00:09:13.872529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.440 [2024-05-15 00:09:13.872560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.440 [2024-05-15 00:09:13.872601] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:22:58.440 [2024-05-15 00:09:13.872617] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:58.440 [2024-05-15 00:09:13.872658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187400 00:22:58.440 [2024-05-15 00:09:13.872678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.872741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.872762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.872807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.872826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.872870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.872889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 
00:22:58.441 [2024-05-15 00:09:13.872959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.872988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873555] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.873970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.873995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.874040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.874062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.874107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187400 00:22:58.441 [2024-05-15 00:09:13.874126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0 00:22:58.441 [2024-05-15 00:09:13.874171] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x187400
00:22:58.441 [2024-05-15 00:09:13.874191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:3603ff60 sqhd:0030 p:0 m:0 dnr:0
[... ~100 further queued READ commands (lba:128200 through lba:129016, len:8, SGL KEYED DATA BLOCK, key:0x187400) and one WRITE (lba:129024, SGL DATA BLOCK OFFSET 0x0), each printed by nvme_io_qpair_print_command and each completed with the same ABORTED - SQ DELETION (00/08) qid:1 status ...]
00:22:58.444 [2024-05-15 00:09:13.897295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:58.444 [2024-05-15 00:09:13.897320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:58.444 [2024-05-15 00:09:13.897348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129032 len:8 PRP1 0x0 PRP2 0x0
00:22:58.444 [2024-05-15 00:09:13.897362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:58.444 [2024-05-15 00:09:13.897460] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:22:58.444 [2024-05-15 00:09:13.897484] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:58.444 [2024-05-15 00:09:13.897522] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:58.444 [2024-05-15 00:09:13.901439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:58.444 [2024-05-15 00:09:13.947996] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
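Every paired notice above has the same fixed shape: an nvme_io_qpair_print_command line describing the queued command, followed by an spdk_nvme_print_completion line carrying the abort status (here "ABORTED - SQ DELETION (00/08)", i.e. generic status type 0x0, status code 0x08). When triaging a flood like this, a short script can reduce it to a one-line summary. The sketch below is a minimal aid assuming only the Python standard library and the exact record format shown in this log; summarize is a hypothetical helper, not anything shipped with SPDK.

    import re
    from collections import Counter

    # Matches the command half of each record pair, e.g.:
    #   nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128192 len:8 ...
    RECORD = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def summarize(log_text: str) -> str:
        """Hypothetical helper: one-line summary of aborted commands in a log excerpt."""
        ops = Counter()
        lbas = []
        for op, _sqid, _cid, _nsid, lba, _length in RECORD.findall(log_text):
            ops[op] += 1
            lbas.append(int(lba))
        if not lbas:
            return "no command records found"
        return (f"{sum(ops.values())} aborted commands "
                f"({ops['READ']} READ, {ops['WRITE']} WRITE), "
                f"lba {min(lbas)}-{max(lbas)}")

    # e.g. print(summarize(open("console.log").read()))

Fed the block above, it reports the aborted-command count, the READ/WRITE split, and the LBA span, which is usually all that matters when deciding whether the abort storm is the expected side effect of a controller reset.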
00:22:58.444 [2024-05-15 00:09:17.509344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:58.444 [2024-05-15 00:09:17.509398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0
[... further queued WRITE commands (lba:30952 through lba:31200, SGL DATA BLOCK OFFSET 0x0) interleaved with READ commands (lba:30336 through lba:30664, SGL KEYED DATA BLOCK, key:0x187400), each completed with the same ABORTED - SQ DELETION (00/08) qid:1 cid:32767 status ...]
00:22:58.446 [2024-05-15 00:09:17.511686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:69 nsid:1 lba:30672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187400 00:22:58.446 [2024-05-15 00:09:17.511700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.446 [2024-05-15 00:09:17.511715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187400 00:22:58.446 [2024-05-15 00:09:17.511728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.446 [2024-05-15 00:09:17.511743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187400 00:22:58.446 [2024-05-15 00:09:17.511756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.446 [2024-05-15 00:09:17.511772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187400 00:22:58.446 [2024-05-15 00:09:17.511785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.446 [2024-05-15 00:09:17.511800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187400 00:22:58.446 [2024-05-15 00:09:17.511813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.446 [2024-05-15 00:09:17.511831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187400 00:22:58.446 [2024-05-15 00:09:17.511845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.446 [2024-05-15 00:09:17.511860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187400 00:22:58.446 [2024-05-15 00:09:17.511873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.446 [2024-05-15 00:09:17.511889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187400 00:22:58.446 [2024-05-15 00:09:17.511902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.511941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.511964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.511982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30744 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.511996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 
00:22:58.447 [2024-05-15 00:09:17.512559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.447 [2024-05-15 00:09:17.512939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.512973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.512988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.513002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.513017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.513032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.513048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.513062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.447 [2024-05-15 00:09:17.513077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187400 00:22:58.447 [2024-05-15 00:09:17.513091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.448 [2024-05-15 00:09:17.513106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187400 00:22:58.448 [2024-05-15 00:09:17.513120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.448 [2024-05-15 00:09:17.513136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x187400 00:22:58.448 [2024-05-15 00:09:17.513150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.448 [2024-05-15 00:09:17.513166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187400 00:22:58.448 [2024-05-15 00:09:17.513180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.448 [2024-05-15 00:09:17.513196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187400 00:22:58.448 [2024-05-15 00:09:17.513210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.448 [2024-05-15 00:09:17.513248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x187400 00:22:58.448 [2024-05-15 00:09:17.513262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.448 [2024-05-15 00:09:17.513276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x187400 00:22:58.448 [2024-05-15 00:09:17.513293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.448 [2024-05-15 00:09:17.514661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.448 [2024-05-15 00:09:17.514690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.448 [2024-05-15 00:09:17.514704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30936 len:8 PRP1 0x0 PRP2 0x0 00:22:58.448 [2024-05-15 00:09:17.514718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.448 [2024-05-15 00:09:17.514784] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:22:58.448 [2024-05-15 00:09:17.514820] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:22:58.448 [2024-05-15 00:09:17.514834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.448 [2024-05-15 00:09:17.518182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.448 [2024-05-15 00:09:17.534278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.448 [2024-05-15 00:09:17.583685] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
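[editor's note] The "(00/08)" status repeated above decodes as status code type 0x00 (generic) / status code 0x08 (ABORTED - SQ DELETION), and dnr:0 means the Do Not Retry bit is clear, i.e. the aborts are retryable. A minimal C sketch of how an SPDK I/O completion callback can classify this status and hold the I/O for resubmission until the reset above finishes; the spdk/nvme.h types and constants are the real SPDK API, while resubmit_after_reset() and the callback wiring are hypothetical illustration:

    #include "spdk/nvme.h"   /* pulls in spdk/nvme_spec.h for the status codes */

    /* Hypothetical helper: requeue the I/O context for resubmission once the
     * controller reset completes. */
    static void resubmit_after_reset(void *io_ctx) { (void)io_ctx; }

    /* True for the "ABORTED - SQ DELETION (00/08)" completions in this log:
     * sct 0x00 = SPDK_NVME_SCT_GENERIC, sc 0x08 = SPDK_NVME_SC_ABORTED_SQ_DELETION. */
    static bool
    is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

    /* Completion callback of the shape passed to spdk_nvme_ns_cmd_read()/write(). */
    static void
    io_complete_cb(void *io_ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    if (is_sq_deletion_abort(cpl) && !cpl->status.dnr) {
                            /* Transient abort from the qpair teardown seen
                             * above (dnr:0 => retryable): try again on a
                             * fresh qpair after the reset. */
                            resubmit_after_reset(io_ctx);
                            return;
                    }
                    /* Other errors: fail the I/O. */
            }
            /* Success path. */
    }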
00:22:58.448 [2024-05-15 00:09:22.072940 - 00:09:22.076008] nvme_qpair.c: 243/474: [repeated notice pairs condensed] following the successful reset, a second batch of queued READ commands (sqid:1, lba 46144-46568, len:8, SGL KEYED DATA BLOCK, key:0x187400) and WRITE commands (sqid:1, lba 46584-46976, len:8, SGL DATA BLOCK OFFSET 0x0) each aborted with: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0
cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076310] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 
00:09:22.076591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.451 [2024-05-15 00:09:22.076651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.076666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x187400 00:22:58.451 [2024-05-15 00:09:22.076680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:0150 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.078105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.451 [2024-05-15 00:09:22.078127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.451 [2024-05-15 00:09:22.078140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47160 len:8 PRP1 0x0 PRP2 0x0 00:22:58.451 [2024-05-15 00:09:22.078153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.451 [2024-05-15 00:09:22.078225] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:22:58.451 [2024-05-15 00:09:22.078245] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:22:58.451 [2024-05-15 00:09:22.078258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.451 [2024-05-15 00:09:22.081512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.451 [2024-05-15 00:09:22.098777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.451 [2024-05-15 00:09:22.147173] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
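The burst above is the expected signature of an injected path failure: deleting the submission queue aborts every in-flight command, bdev_nvme frees the disconnected qpair, and the path moves from port 4422 back to 4420. A quick way to pull the path transitions out of a capture like this one (a sketch, not part of the harness, assuming the bdevperf output has been saved to the try.txt file that failover.sh cats further down):

    log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    # print each distinct failover transition with its count, e.g.
    # '1 Start failover from 192.168.100.8:4422 to 192.168.100.8:4420'
    grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' "$log" | sort | uniq -c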
00:22:58.451
00:22:58.451 Latency(us)
00:22:58.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:58.451 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:58.451 Verification LBA range: start 0x0 length 0x4000
00:22:58.451 NVMe0n1 : 15.01 11257.79 43.98 222.05 0.00 11120.90 485.45 1037701.88
00:22:58.451 ===================================================================================================================
00:22:58.451 Total : 11257.79 43.98 222.05 0.00 11120.90 485.45 1037701.88
00:22:58.451 Received shutdown signal, test time was about 15.000000 seconds
00:22:58.451
00:22:58.451 Latency(us)
00:22:58.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:58.451 ===================================================================================================================
00:22:58.451 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:58.451 00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:58.451 00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:58.451 00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=611066
00:22:58.451 00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:58.451 00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 611066 /var/tmp/bdevperf.sock
00:22:58.451 00:09:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 611066 ']'
00:22:58.451 00:09:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:58.451 00:09:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:22:58.451 00:09:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
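The failover.sh@65-@67 entries above are the pass criterion for the first half of the test: three path failures were injected, so the captured log must contain exactly three successful controller resets. Condensed into standalone form (a minimal sketch; $log stands in for the captured bdevperf output):

    # count=3 in the trace above; any other value fails the test
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, found $count" >&2
        exit 1
    fi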
00:22:58.452 00:09:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:58.452 00:09:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:58.452 00:09:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:58.452 00:09:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:22:58.452 00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:58.452 [2024-05-15 00:09:27.699124] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:58.452 00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:22:58.710 [2024-05-15 00:09:27.972028] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:22:58.710 00:09:27 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.276 NVMe0n1 00:22:59.276 00:09:28 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.276 00:22:59.534 00:09:28 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.792 00:22:59.792 00:09:28 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.792 00:09:28 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:00.049 00:09:29 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.307 00:09:29 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:03.586 00:09:32 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.586 00:09:32 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:03.586 00:09:32 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=611731 00:23:03.586 00:09:32 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.586 00:09:32 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 611731 00:23:04.519 0 00:23:04.519 00:09:33 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.519 [2024-05-15 00:09:27.133854] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:23:04.519 [2024-05-15 00:09:27.133961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611066 ] 00:23:04.519 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.519 [2024-05-15 00:09:27.202348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.519 [2024-05-15 00:09:27.307980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.519 [2024-05-15 00:09:29.420328] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:23:04.519 [2024-05-15 00:09:29.420876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:04.519 [2024-05-15 00:09:29.420923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:04.519 [2024-05-15 00:09:29.443246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:04.519 [2024-05-15 00:09:29.459577] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:04.519 Running I/O for 1 seconds... 00:23:04.519 00:23:04.519 Latency(us) 00:23:04.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.519 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:04.519 Verification LBA range: start 0x0 length 0x4000 00:23:04.519 NVMe0n1 : 1.01 14128.34 55.19 0.00 0.00 9006.32 3349.62 19515.16 00:23:04.519 =================================================================================================================== 00:23:04.519 Total : 14128.34 55.19 0.00 0.00 9006.32 3349.62 19515.16 00:23:04.519 00:09:33 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.519 00:09:33 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:04.777 00:09:34 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:05.035 00:09:34 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:05.035 00:09:34 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:05.292 00:09:34 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:05.550 00:09:34 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:08.829 00:09:37 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.829 00:09:37 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 611066 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 611066 ']' 00:23:08.829 00:09:38 
nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 611066 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 611066 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 611066' 00:23:08.829 killing process with pid 611066 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 611066 00:23:08.829 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 611066 00:23:09.087 00:09:38 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:09.087 00:09:38 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:09.652 rmmod nvme_rdma 00:23:09.652 rmmod nvme_fabrics 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 608657 ']' 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 608657 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 608657 ']' 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 608657 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 608657 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 608657' 
00:23:09.652 killing process with pid 608657 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 608657 00:23:09.652 [2024-05-15 00:09:38.778028] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:09.652 00:09:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 608657 00:23:09.652 [2024-05-15 00:09:38.851140] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:23:09.910 00:09:39 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.910 00:09:39 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:09.910 00:23:09.910 real 0m33.464s 00:23:09.910 user 2m5.337s 00:23:09.910 sys 0m4.250s 00:23:09.910 00:09:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:09.910 00:09:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:09.910 ************************************ 00:23:09.910 END TEST nvmf_failover 00:23:09.910 ************************************ 00:23:09.910 00:09:39 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:23:09.910 00:09:39 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:09.910 00:09:39 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:09.910 00:09:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:09.910 ************************************ 00:23:09.910 START TEST nvmf_host_discovery 00:23:09.910 ************************************ 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:23:09.910 * Looking for test storage... 
00:23:09.910 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.910 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.911 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.911 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.911 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.911 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.911 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.911 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.911 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.911 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:09.911 00:09:39 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.169 00:09:39 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.169 00:09:39 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:10.170 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
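The discovery.sh@11-@12 trace above, together with the exit 0 that immediately follows, shows the suite bailing out deliberately rather than failing. The guard reduces to something like the following (a sketch; the transport variable's name is not visible in the already-expanded trace, so TEST_TRANSPORT is an assumption):

    # skipping is a clean pass for this suite on RDMA, hence exit 0
    if [ "$TEST_TRANSPORT" == rdma ]; then   # the trace shows the expanded form: '[' rdma == rdma ']'
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi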
00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:23:10.170 00:23:10.170 real 0m0.067s 00:23:10.170 user 0m0.028s 00:23:10.170 sys 0m0.044s 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.170 ************************************ 00:23:10.170 END TEST nvmf_host_discovery 00:23:10.170 ************************************ 00:23:10.170 00:09:39 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:23:10.170 00:09:39 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:10.170 00:09:39 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:10.170 00:09:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:10.170 ************************************ 00:23:10.170 START TEST nvmf_host_multipath_status 00:23:10.170 ************************************ 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:23:10.170 * Looking for test storage... 00:23:10.170 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.170 00:09:39 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:10.170 00:09:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:12.703 
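What follows is nvmftestinit working through physical NIC discovery. As a heavily abridged outline of the traced control flow (function names are taken from the trace; the variable name TEST_TRANSPORT and the condensed bodies are assumptions, and the comments are interpretive):

    nvmftestinit() {
        [ -z "$TEST_TRANSPORT" ] && return 1      # common.sh@441: a transport must be given (rdma here)
        trap nvmftestfini SIGINT SIGTERM EXIT     # common.sh@446: always tear the target down on exit
        prepare_net_devs                          # common.sh@448: NET_TYPE=phy, so enumerate real mlx5 NICs
        NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"  # common.sh@454: set once devices are known
    }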
00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:23:12.703 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # 
[[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:23:12.703 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:23:12.703 Found net devices under 0000:09:00.0: mlx_0_0 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:23:12.703 Found net devices under 0000:09:00.1: mlx_0_1 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:23:12.703 00:09:41 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:12.703 00:09:41 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:12.703 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:12.703 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:23:12.703 altname enp9s0f0np0 00:23:12.703 inet 192.168.100.8/24 scope global mlx_0_0 00:23:12.703 valid_lft forever preferred_lft forever 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:12.703 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:12.703 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:23:12.703 altname enp9s0f1np1 00:23:12.703 inet 192.168.100.9/24 scope global mlx_0_1 00:23:12.703 valid_lft forever preferred_lft forever 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:12.703 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- 
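The get_ip_address helper whose expansion is shown above is a three-stage pipeline: 'ip -o' prints one line per address, awk takes field 4 ("ADDR/PREFIX"), and cut drops the prefix length. As a self-contained function, with the values this host returned:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # 192.168.100.8 on this host
    get_ip_address mlx_0_1    # 192.168.100.9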
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:12.704 192.168.100.9' 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:12.704 192.168.100.9' 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:12.704 192.168.100.9' 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:23:12.704 00:09:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:12.704 
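RDMA_IP_LIST ends up as a newline-separated pair of addresses, and common.sh peels it apart with head/tail exactly as traced. The same logic in isolation:

    # first line becomes the primary target IP, second line the secondary
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9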
00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=614535 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 614535 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 614535 ']' 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.704 00:09:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:12.962 [2024-05-15 00:09:42.062701] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:23:12.962 [2024-05-15 00:09:42.062795] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.962 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.962 [2024-05-15 00:09:42.136032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:12.962 [2024-05-15 00:09:42.250876] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.962 [2024-05-15 00:09:42.250956] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.962 [2024-05-15 00:09:42.250973] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.962 [2024-05-15 00:09:42.250986] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.962 [2024-05-15 00:09:42.250998] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
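nvmfappstart above launches nvmf_tgt on core mask 0x3 (cores 0 and 1, matching the two reactor notices just below) and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that startup; the polling loop here is illustrative, the real waitforlisten adds retry limits:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the UNIX-domain socket until the target services RPCs
    while ! /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done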
00:23:12.962 [2024-05-15 00:09:42.251081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.962 [2024-05-15 00:09:42.251088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.895 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.895 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:23:13.895 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:13.895 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.895 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:13.895 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.895 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=614535 00:23:13.895 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:14.174 [2024-05-15 00:09:43.336590] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8673d0/0x86b8c0) succeed. 00:23:14.174 [2024-05-15 00:09:43.348133] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8688d0/0x8acf50) succeed. 00:23:14.174 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:14.445 Malloc0 00:23:14.445 00:09:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:14.701 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:15.265 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:15.265 [2024-05-15 00:09:44.582190] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:15.265 [2024-05-15 00:09:44.582567] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:15.265 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:15.522 [2024-05-15 00:09:44.859147] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:15.779 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=614832 00:23:15.779 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:15.779 
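With the target up, the test provisions it over RPC: an RDMA transport, a RAM-backed bdev, and one subsystem with ANA reporting enabled (-r) listening on both ports. The exact sequence from the trace, collected in one place (the rpc() wrapper is just shorthand here; the harness spells out the full path each time):

    rpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"; }
    rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421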
00:09:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.779 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 614832 /var/tmp/bdevperf.sock 00:23:15.779 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 614832 ']' 00:23:15.779 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.779 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:15.779 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.779 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:15.779 00:09:44 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:16.037 00:09:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.037 00:09:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:23:16.037 00:09:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:16.294 00:09:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:16.551 Nvme0n1 00:23:16.551 00:09:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:16.809 Nvme0n1 00:23:16.809 00:09:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:16.809 00:09:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:19.335 00:09:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:19.335 00:09:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:23:19.335 00:09:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:19.335 00:09:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:20.708 00:09:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:20.708 00:09:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 
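bdevperf then receives two attach calls for the same subsystem, one per listener. The first creates controller Nvme0 over port 4420; the second, with '-x multipath', registers port 4421 as an additional path to the same Nvme0n1 bdev rather than a new controller. Condensed, with a shorthand wrapper (brpc is illustrative, not a harness name):

    brpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
    brpc bdev_nvme_set_options -r -1      # -r -1: retry failed I/O indefinitely
    brpc bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    # -l -1: never declare the controller lost, -o 10: reconnect every 10 s
    brpc bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10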
-- # port_status 4420 current true 00:23:20.708 00:09:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.708 00:09:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:20.708 00:09:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.708 00:09:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:20.708 00:09:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.708 00:09:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.966 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.966 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.966 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.966 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:21.224 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.224 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:21.224 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.224 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:21.482 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.482 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:21.482 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.482 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:21.740 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.740 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:21.740 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.740 00:09:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 
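Every check_status above is six port_status calls: the 'current', 'connected' and 'accessible' flags for port 4420 and then 4421, each read from bdev_nvme_get_io_paths and filtered with jq on the listener's trsvcid. The helper, lifted from multipath_status.sh@64 and lightly expanded to take the expected value:

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }
    port_status 4420 current true    # is 4420 the path I/O is currently routed to?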
00:23:21.998 00:09:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.998 00:09:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:21.998 00:09:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:22.256 00:09:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:22.514 00:09:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:23.449 00:09:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:23.449 00:09:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:23.449 00:09:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.449 00:09:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:23.706 00:09:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.707 00:09:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:23.707 00:09:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.707 00:09:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:23.964 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.964 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:23.964 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.964 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:24.221 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.221 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:24.221 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.221 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:24.480 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.480 
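set_ANA_state is the knob exercised for the rest of the run: it pushes a new ANA state to each listener on the target side, after which the host-side flags are re-checked following a one-second settle. As a function, mirroring multipath_status.sh@59-60:

    set_ANA_state() {
        local rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized optimized   # the transition just issued above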
00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:24.480 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.480 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:24.738 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.738 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:24.738 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.738 00:09:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:24.995 00:09:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.995 00:09:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:24.995 00:09:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:25.253 00:09:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:23:25.510 00:09:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:26.444 00:09:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:26.444 00:09:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:26.444 00:09:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.444 00:09:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:26.702 00:09:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.702 00:09:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:26.702 00:09:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.702 00:09:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:26.960 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:26.961 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 
00:23:26.961 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.961 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:27.218 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.218 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:27.218 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.218 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:27.476 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.476 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:27.476 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.476 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:27.734 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.734 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:27.734 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.734 00:09:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:27.992 00:09:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.992 00:09:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:27.992 00:09:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:28.250 00:09:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:23:28.508 00:09:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:29.442 00:09:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:29.442 00:09:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:29.442 00:09:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.442 00:09:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:29.700 00:09:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.700 00:09:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:29.700 00:09:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.700 00:09:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:29.990 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:29.990 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:29.990 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.990 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.249 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.249 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.249 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.249 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.506 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.506 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:30.506 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.506 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.764 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.764 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:30.764 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.764 00:09:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:31.022 00:10:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 
00:23:31.022 00:10:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:31.022 00:10:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:23:31.279 00:10:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:23:31.537 00:10:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:32.468 00:10:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:32.468 00:10:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:32.468 00:10:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.468 00:10:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:32.725 00:10:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:32.725 00:10:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:32.725 00:10:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.725 00:10:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:32.983 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:32.983 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:32.983 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.983 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:33.240 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.240 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:33.240 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.240 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:33.498 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.498 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible 
false 00:23:33.498 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.498 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:33.755 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:33.755 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:33.755 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.755 00:10:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:34.013 00:10:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:34.013 00:10:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:34.013 00:10:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:23:34.271 00:10:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:34.528 00:10:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:35.461 00:10:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:35.461 00:10:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:35.461 00:10:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.461 00:10:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.718 00:10:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:35.718 00:10:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:35.718 00:10:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.718 00:10:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:35.976 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.976 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:35.976 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.976 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.235 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.235 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.235 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.235 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:36.493 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.493 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:36.493 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.493 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:36.751 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.751 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:36.751 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.751 00:10:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.009 00:10:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.009 00:10:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:37.267 00:10:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:37.267 00:10:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:23:37.267 00:10:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:37.525 00:10:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:38.899 00:10:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:38.899 00:10:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 
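multipath_status.sh@116 switches the bdev from the default failover behaviour to active/active. Under active_active every accessible path carries I/O, which is why the checks that follow expect current=true on both 4420 and 4421 once both listeners are optimized. The single call involved:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active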
current true 00:23:38.899 00:10:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.899 00:10:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:38.899 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.899 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:38.899 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.899 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.157 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.157 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.157 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.157 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.415 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.415 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.415 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.415 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:39.674 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.674 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:39.674 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.674 00:10:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:39.931 00:10:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.931 00:10:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:39.931 00:10:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.932 00:10:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.189 00:10:09 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.189 00:10:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:40.189 00:10:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:40.448 00:10:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:40.707 00:10:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:41.640 00:10:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:41.640 00:10:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:41.640 00:10:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.640 00:10:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:41.898 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:41.898 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:41.898 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.899 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:42.157 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.157 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:42.157 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.157 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:42.415 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.415 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:42.415 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.415 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:42.673 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.673 00:10:11 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:42.673 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:42.673 00:10:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:42.931 00:10:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:42.931 00:10:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:42.931 00:10:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:42.931 00:10:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:43.190 00:10:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:43.190 00:10:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:23:43.190 00:10:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:23:43.449 00:10:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized
00:23:43.707 00:10:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:23:44.669 00:10:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:23:44.669 00:10:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:44.669 00:10:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:44.669 00:10:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:44.927 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:44.927 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:44.927 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:44.927 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:45.185 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:45.185 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:45.185 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:45.185 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:45.443 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:45.443 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:45.443 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:45.443 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:45.701 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:45.701 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:45.701 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:45.701 00:10:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:45.959 00:10:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:45.959 00:10:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:45.959 00:10:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:45.959 00:10:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:46.217 00:10:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:46.217 00:10:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:23:46.217 00:10:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:23:46.475 00:10:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:23:46.733 00:10:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:23:47.666 00:10:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:23:47.666 00:10:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:47.666 00:10:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:47.666 00:10:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:47.924 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:47.924 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:47.924 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:47.924 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:48.182 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:48.182 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:48.182 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:48.182 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:48.440 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:48.440 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:48.440 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:48.440 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:48.698 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:48.698 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:48.698 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:48.698 00:10:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:48.956 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:48.957 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:23:48.957 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:48.957 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
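The @59-@73 trace above exercises three small helpers from host/multipath_status.sh: set_ANA_state pushes one ANA state per listener on the target side, port_status reads one field of one I/O path back through the bdevperf RPC socket, and check_status asserts all six current/connected/accessible flags in one call. A minimal sketch of what the trace implies, reconstructed from the trace itself (argument names and exact control flow are assumptions, not the repository source):

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  port_status() { # e.g. port_status 4420 accessible true
          local port=$1 field=$2 expected=$3 value
          # @64: dump the io_paths bdevperf sees, pick one field of the path on this port
          value=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
                  jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
          [[ $value == "$expected" ]]
  }

  check_status() { # six booleans: current, connected, accessible for ports 4420/4421 (@68-@73)
          port_status 4420 current "$1" && port_status 4421 current "$2" &&
                  port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
                  port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

  set_ANA_state() { # one ANA state per listener (@59/@60)
          $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n "$1"
          $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n "$2"
  }

The sleep 1 between set_ANA_state and check_status (@130, @134) gives the host a moment to process the ANA change before the path flags are asserted, which is why check_status true false true true true false passes once 4421 has been made inaccessible.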
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 614832
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 614832 ']'
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 614832
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 614832
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 614832'
killing process with pid 614832
00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 614832
00:23:49.215 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 614832
00:23:49.215 Connection closed with partial response:
00:23:49.215
00:23:49.215
00:23:49.477 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 614832
00:23:49.477 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-05-15 00:09:44.913822] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization...
00:23:49.477 [2024-05-15 00:09:44.913902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614832 ]
00:23:49.477 EAL: No free 2048 kB hugepages reported on node 1
00:23:49.478 [2024-05-15 00:09:44.983400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:49.478 [2024-05-15 00:09:45.094004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:49.478 Running I/O for 90 seconds...
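The @946-@970 trace above is autotest_common.sh's killprocess helper shutting down bdevperf (pid 614832): it requires a non-empty pid, probes the process with kill -0, resolves its name via ps on Linux, refuses to signal a bare sudo, then kills and reaps it. A minimal sketch of that sequence as the trace suggests it (a reconstruction under assumptions, not verbatim SPDK source):

  killprocess() {
          local pid=$1 process_name
          [ -z "$pid" ] && return 1                                # @946: a pid is required
          kill -0 "$pid"                                           # @950: fails if the process is already gone
          if [ "$(uname)" = Linux ]; then                          # @951: name lookup only done on Linux
                  process_name=$(ps --no-headers -o comm= "$pid")  # @952: resolves to reactor_2 here
          fi
          [ "$process_name" = sudo ] && return 1                   # @956: never signal the sudo wrapper itself
          echo "killing process with pid $pid"                     # @964
          kill "$pid"                                              # @965
          wait "$pid"                                              # @970: reap the child and collect its status
  }

The 'Connection closed with partial response:' line appears as bdevperf is killed mid-request, and the NOTICE dump that follows is the captured bdevperf log cat'ed from try.txt at @141: each READ/WRITE completion below carries ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. path-related status (sct 0x3) with ANA-inaccessible status code (sc 0x02), consistent with the inaccessible ANA states the test set on the listeners earlier.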
00:23:49.478 [2024-05-15 00:10:00.401762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.401812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.401881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.401901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.401942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.401959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.401976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.401991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x187400 00:23:49.478 [2024-05-15 00:10:00.402468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x187400 00:23:49.478 
[2024-05-15 00:10:00.402498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x187400 00:23:49.478 [2024-05-15 00:10:00.402527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x187400 00:23:49.478 [2024-05-15 00:10:00.402557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x187400 00:23:49.478 [2024-05-15 00:10:00.402594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x187400 00:23:49.478 [2024-05-15 00:10:00.402624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 
00:10:00.402785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.478 [2024-05-15 00:10:00.402889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x187400 00:23:49.478 [2024-05-15 00:10:00.402940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187400 00:23:49.478 [2024-05-15 00:10:00.402978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.402995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187400 00:23:49.478 [2024-05-15 00:10:00.403010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:49.478 [2024-05-15 00:10:00.403026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x187400 00:23:49.478 [2024-05-15 00:10:00.403040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 
len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x187400 00:23:49.479 
[2024-05-15 00:10:00.403416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:70968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403705] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.403972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.403992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.404008] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.404025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.404039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.404055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.404069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.404090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.404105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.404122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.404136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.404153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.404167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.404183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.404197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.404213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187400 00:23:49.479 [2024-05-15 00:10:00.404227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:49.479 [2024-05-15 00:10:00.404243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:23:49.480 [2024-05-15 00:10:00.404605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.404709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.480 [2024-05-15 00:10:00.404740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.480 [2024-05-15 00:10:00.404771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.480 [2024-05-15 00:10:00.404801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.480 [2024-05-15 00:10:00.404835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.480 [2024-05-15 00:10:00.404866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:49.480 [2024-05-15 00:10:00.404896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.480 [2024-05-15 00:10:00.404926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.480 [2024-05-15 00:10:00.404982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.404999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.480 [2024-05-15 00:10:00.405014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.480 [2024-05-15 00:10:00.405044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 
sqhd:002c p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:49.480 [2024-05-15 00:10:00.405467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x187400 00:23:49.480 [2024-05-15 00:10:00.405481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:23:49.481 [2024-05-15 00:10:00.405527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.405966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.405986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.406001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:00.406424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:00.406449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.821919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.821999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.822646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120280 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.822683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.822715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.822745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.822775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.822805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.822834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.822863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.822902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.822958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.822990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.823005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.823023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.823038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.823055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.823069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.823086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.823100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.823117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.481 [2024-05-15 00:10:15.823132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.823149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.823163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.823181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.823196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.823212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.823227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:49.481 [2024-05-15 00:10:15.823244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187400 00:23:49.481 [2024-05-15 00:10:15.823258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.823309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823326] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.823431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.823492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.823570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 
00:10:15.823634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.823701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.823964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.823981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.823996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.824027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.824058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.824089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 
00:10:15.824105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.824120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.824150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.824182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.824213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.824265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.824297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.824341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.824373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.824405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120608 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000075a0000 len:0x1000 key:0x187400 00:23:49.482 [2024-05-15 00:10:15.824435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:49.482 [2024-05-15 00:10:15.824452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.482 [2024-05-15 00:10:15.824466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:49.483 [2024-05-15 00:10:15.824482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.483 [2024-05-15 00:10:15.824497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:49.483 [2024-05-15 00:10:15.824513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.483 [2024-05-15 00:10:15.824528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:49.483 [2024-05-15 00:10:15.824545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.483 [2024-05-15 00:10:15.824559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:49.483 [2024-05-15 00:10:15.824576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187400 00:23:49.483 [2024-05-15 00:10:15.824591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.483 [2024-05-15 00:10:15.824607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.483 [2024-05-15 00:10:15.824625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:49.483 [2024-05-15 00:10:15.824643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x187400 00:23:49.483 [2024-05-15 00:10:15.824658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:49.483 [2024-05-15 00:10:15.824674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.483 [2024-05-15 00:10:15.824689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:49.483 [2024-05-15 00:10:15.824705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.483 [2024-05-15 00:10:15.824720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:49.483 [2024-05-15 
00:10:15.824737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x187400
00:23:49.483 [2024-05-15 00:10:15.824767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:23:49.483 [2024-05-15 00:10:15.824784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x187400
00:23:49.483 [2024-05-15 00:10:15.824798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:23:49.483 [2024-05-15 00:10:15.824814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187400
00:23:49.483 [2024-05-15 00:10:15.824842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:23:49.483 [2024-05-15 00:10:15.824860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x187400
00:23:49.483 [2024-05-15 00:10:15.824875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:23:49.483 [2024-05-15 00:10:15.824891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187400
00:23:49.483 [2024-05-15 00:10:15.824906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:23:49.483 Received shutdown signal, test time was about 32.120442 seconds
00:23:49.483
00:23:49.483                          Latency(us)
00:23:49.483 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min        max
00:23:49.483 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:49.483 Verification LBA range: start 0x0 length 0x4000
00:23:49.483 Nvme0n1 :          32.12         12336.31   48.19    0.00     0.00   10351.52   71.68   4026531.84
00:23:49.483 ===================================================================================================================
00:23:49.483 Total :                          12336.31   48.19    0.00     0.00   10351.52   71.68   4026531.84
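The teardown traced below is, in outline, straightforward. As a minimal bash sketch of the same flow (illustrative only, not the literal multipath_status.sh/nvmf/common.sh source; the retry-loop bound shown in the trace is real, but the pid variable name here is an assumption):

  # Sketch: tear down the SPDK NVMe-oF target after the multipath run.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem first
  trap - SIGINT SIGTERM EXIT                              # drop the error-handling trap
  rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
  sync                                                    # flush outstanding I/O before unloading
  set +e                                                  # unload may fail while references drain
  for i in {1..20}; do                                    # retry: nvme-rdma can stay busy briefly
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
  done
  set -e
  kill "$nvmfpid" && wait "$nvmfpid"                      # stop the nvmf_tgt reactor (pid 614535 here)

In the trace that follows, the first modprobe -r succeeds, so the unload loop runs only once before killprocess stops the target application.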
00:23:49.483 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:49.741 00:10:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:23:49.741 rmmod nvme_rdma
00:23:49.741 rmmod nvme_fabrics
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 614535 ']'
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 614535
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 614535 ']'
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 614535
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 614535
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 614535'
00:23:49.741 killing process with pid 614535
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 614535
00:23:49.741 [2024-05-15 00:10:19.035905] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:23:49.741 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 614535
00:23:50.000 [2024-05-15 00:10:19.096149] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:23:50.258 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:50.258 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:23:50.258
00:23:50.258 real 0m40.062s
00:23:50.258 user 2m9.359s
00:23:50.258 sys 0m6.222s
00:23:50.258 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:23:50.258 00:10:19 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:50.258 ************************************
00:23:50.258 END TEST nvmf_host_multipath_status
00:23:50.258 ************************************
00:23:50.259 00:10:19 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:23:50.259 00:10:19 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:23:50.259 00:10:19 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:23:50.259 00:10:19 nvmf_rdma --
common/autotest_common.sh@10 -- # set +x 00:23:50.259 ************************************ 00:23:50.259 START TEST nvmf_discovery_remove_ifc 00:23:50.259 ************************************ 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:23:50.259 * Looking for test storage... 00:23:50.259 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:23:50.259 00:10:19 
nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:50.259 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:23:50.259 00:23:50.259 real 0m0.064s 00:23:50.259 user 0m0.025s 00:23:50.259 sys 0m0.045s 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:50.259 00:10:19 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.259 ************************************ 00:23:50.259 END TEST nvmf_discovery_remove_ifc 00:23:50.259 ************************************ 00:23:50.259 00:10:19 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:23:50.259 00:10:19 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:50.259 00:10:19 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:50.259 00:10:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:50.259 ************************************ 00:23:50.259 START TEST nvmf_identify_kernel_target 00:23:50.259 ************************************ 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:23:50.259 * Looking for test storage... 00:23:50.259 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.259 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.260 00:10:19 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.260 00:10:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 
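The device discovery that plays out over the next stretch of trace follows a simple pattern: build per-vendor lists of supported PCI IDs (the e810, x722, and mlx arrays being declared here), match the host's PCI devices against them, then map each matching PCI function to its netdev through sysfs. A rough sketch of that idea (simplified; the real nvmf/common.sh builds a pci_bus_cache from lspci output and covers many more device IDs than the single ConnectX-5 ID shown here):

  # Sketch: find netdevs backed by a given vendor:device pair (15b3:1017, as matched below).
  mlx=($(lspci -Dn | awk '$3 == "15b3:1017" {print $1}'))
  for pci in "${mlx[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev dirs registered for this PCI function
      pci_net_devs=("${pci_net_devs[@]##*/}")           # keep just the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done

On this node the loop would report the two mlx5 ports that the trace below identifies as mlx_0_0 and mlx_0_1 under 0000:09:00.0 and 0000:09:00.1.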
00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.790 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:23:52.791 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 
'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:23:52.791 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:23:52.791 Found net devices under 0000:09:00.0: mlx_0_0 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:23:52.791 Found net devices under 0000:09:00.1: mlx_0_1 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@62 -- # modprobe ib_cm 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:52.791 00:10:22 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:52.791 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:52.791 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:23:52.791 altname enp9s0f0np0 00:23:52.791 inet 192.168.100.8/24 scope global mlx_0_0 00:23:52.791 valid_lft forever preferred_lft forever 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:52.791 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:52.791 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:23:52.791 altname enp9s0f1np1 00:23:52.791 inet 192.168.100.9/24 scope global mlx_0_1 00:23:52.791 valid_lft forever preferred_lft forever 00:23:52.791 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 
00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:52.792 192.168.100.9' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:52.792 192.168.100.9' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:52.792 192.168.100.9' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:52.792 00:10:22 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.792 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:53.050 00:10:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:23:54.425 Waiting for block devices as requested 00:23:54.425 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:54.425 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:54.425 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:54.425 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:54.425 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:54.683 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:54.683 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:54.683 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:54.683 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:54.683 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:54.941 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:54.941 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:54.941 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:55.199 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:55.199 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:55.199 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:55.199 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:55.457 No valid GPT data, bailing 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir 
/sys/kernel/config/nvmet/ports/1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -t rdma -s 4420 00:23:55.457 00:23:55.457 Discovery Log Number of Records 2, Generation counter 2 00:23:55.457 =====Discovery Log Entry 0====== 00:23:55.457 trtype: rdma 00:23:55.457 adrfam: ipv4 00:23:55.457 subtype: current discovery subsystem 00:23:55.457 treq: not specified, sq flow control disable supported 00:23:55.457 portid: 1 00:23:55.457 trsvcid: 4420 00:23:55.457 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:55.457 traddr: 192.168.100.8 00:23:55.457 eflags: none 00:23:55.457 rdma_prtype: not specified 00:23:55.457 rdma_qptype: connected 00:23:55.457 rdma_cms: rdma-cm 00:23:55.457 rdma_pkey: 0x0000 00:23:55.457 =====Discovery Log Entry 1====== 00:23:55.457 trtype: rdma 00:23:55.457 adrfam: ipv4 00:23:55.457 subtype: nvme subsystem 00:23:55.457 treq: not specified, sq flow control disable supported 00:23:55.457 portid: 1 00:23:55.457 trsvcid: 4420 00:23:55.457 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:55.457 traddr: 192.168.100.8 00:23:55.457 eflags: none 00:23:55.457 rdma_prtype: not specified 00:23:55.457 rdma_qptype: connected 00:23:55.457 rdma_cms: rdma-cm 00:23:55.457 rdma_pkey: 0x0000 00:23:55.457 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:23:55.458 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:55.458 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.717 ===================================================== 00:23:55.717 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:55.717 ===================================================== 00:23:55.717 Controller Capabilities/Features 00:23:55.717 ================================ 00:23:55.717 Vendor ID: 0000 00:23:55.717 Subsystem Vendor ID: 0000 00:23:55.717 Serial Number: cce486decb491b0ee66e 00:23:55.717 Model Number: Linux 00:23:55.717 Firmware Version: 6.7.0-68 00:23:55.717 Recommended Arb Burst: 0 00:23:55.717 IEEE OUI Identifier: 00 00 00 00:23:55.717 Multi-path I/O 00:23:55.717 May have multiple subsystem ports: No 00:23:55.717 May have multiple controllers: No 00:23:55.717 Associated with SR-IOV VF: No 00:23:55.717 
Max Data Transfer Size: Unlimited 00:23:55.717 Max Number of Namespaces: 0 00:23:55.717 Max Number of I/O Queues: 1024 00:23:55.717 NVMe Specification Version (VS): 1.3 00:23:55.717 NVMe Specification Version (Identify): 1.3 00:23:55.717 Maximum Queue Entries: 128 00:23:55.717 Contiguous Queues Required: No 00:23:55.717 Arbitration Mechanisms Supported 00:23:55.717 Weighted Round Robin: Not Supported 00:23:55.717 Vendor Specific: Not Supported 00:23:55.717 Reset Timeout: 7500 ms 00:23:55.717 Doorbell Stride: 4 bytes 00:23:55.717 NVM Subsystem Reset: Not Supported 00:23:55.717 Command Sets Supported 00:23:55.717 NVM Command Set: Supported 00:23:55.717 Boot Partition: Not Supported 00:23:55.717 Memory Page Size Minimum: 4096 bytes 00:23:55.717 Memory Page Size Maximum: 4096 bytes 00:23:55.717 Persistent Memory Region: Not Supported 00:23:55.717 Optional Asynchronous Events Supported 00:23:55.717 Namespace Attribute Notices: Not Supported 00:23:55.717 Firmware Activation Notices: Not Supported 00:23:55.717 ANA Change Notices: Not Supported 00:23:55.717 PLE Aggregate Log Change Notices: Not Supported 00:23:55.717 LBA Status Info Alert Notices: Not Supported 00:23:55.717 EGE Aggregate Log Change Notices: Not Supported 00:23:55.717 Normal NVM Subsystem Shutdown event: Not Supported 00:23:55.717 Zone Descriptor Change Notices: Not Supported 00:23:55.717 Discovery Log Change Notices: Supported 00:23:55.717 Controller Attributes 00:23:55.717 128-bit Host Identifier: Not Supported 00:23:55.717 Non-Operational Permissive Mode: Not Supported 00:23:55.717 NVM Sets: Not Supported 00:23:55.717 Read Recovery Levels: Not Supported 00:23:55.717 Endurance Groups: Not Supported 00:23:55.717 Predictable Latency Mode: Not Supported 00:23:55.717 Traffic Based Keep ALive: Not Supported 00:23:55.717 Namespace Granularity: Not Supported 00:23:55.717 SQ Associations: Not Supported 00:23:55.717 UUID List: Not Supported 00:23:55.717 Multi-Domain Subsystem: Not Supported 00:23:55.717 Fixed Capacity Management: Not Supported 00:23:55.717 Variable Capacity Management: Not Supported 00:23:55.717 Delete Endurance Group: Not Supported 00:23:55.717 Delete NVM Set: Not Supported 00:23:55.717 Extended LBA Formats Supported: Not Supported 00:23:55.717 Flexible Data Placement Supported: Not Supported 00:23:55.717 00:23:55.717 Controller Memory Buffer Support 00:23:55.717 ================================ 00:23:55.717 Supported: No 00:23:55.717 00:23:55.717 Persistent Memory Region Support 00:23:55.717 ================================ 00:23:55.717 Supported: No 00:23:55.717 00:23:55.717 Admin Command Set Attributes 00:23:55.717 ============================ 00:23:55.717 Security Send/Receive: Not Supported 00:23:55.717 Format NVM: Not Supported 00:23:55.717 Firmware Activate/Download: Not Supported 00:23:55.717 Namespace Management: Not Supported 00:23:55.717 Device Self-Test: Not Supported 00:23:55.717 Directives: Not Supported 00:23:55.717 NVMe-MI: Not Supported 00:23:55.717 Virtualization Management: Not Supported 00:23:55.717 Doorbell Buffer Config: Not Supported 00:23:55.717 Get LBA Status Capability: Not Supported 00:23:55.717 Command & Feature Lockdown Capability: Not Supported 00:23:55.717 Abort Command Limit: 1 00:23:55.717 Async Event Request Limit: 1 00:23:55.717 Number of Firmware Slots: N/A 00:23:55.717 Firmware Slot 1 Read-Only: N/A 00:23:55.717 Firmware Activation Without Reset: N/A 00:23:55.717 Multiple Update Detection Support: N/A 00:23:55.718 Firmware Update Granularity: No Information Provided 00:23:55.718 
Per-Namespace SMART Log: No 00:23:55.718 Asymmetric Namespace Access Log Page: Not Supported 00:23:55.718 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:55.718 Command Effects Log Page: Not Supported 00:23:55.718 Get Log Page Extended Data: Supported 00:23:55.718 Telemetry Log Pages: Not Supported 00:23:55.718 Persistent Event Log Pages: Not Supported 00:23:55.718 Supported Log Pages Log Page: May Support 00:23:55.718 Commands Supported & Effects Log Page: Not Supported 00:23:55.718 Feature Identifiers & Effects Log Page:May Support 00:23:55.718 NVMe-MI Commands & Effects Log Page: May Support 00:23:55.718 Data Area 4 for Telemetry Log: Not Supported 00:23:55.718 Error Log Page Entries Supported: 1 00:23:55.718 Keep Alive: Not Supported 00:23:55.718 00:23:55.718 NVM Command Set Attributes 00:23:55.718 ========================== 00:23:55.718 Submission Queue Entry Size 00:23:55.718 Max: 1 00:23:55.718 Min: 1 00:23:55.718 Completion Queue Entry Size 00:23:55.718 Max: 1 00:23:55.718 Min: 1 00:23:55.718 Number of Namespaces: 0 00:23:55.718 Compare Command: Not Supported 00:23:55.718 Write Uncorrectable Command: Not Supported 00:23:55.718 Dataset Management Command: Not Supported 00:23:55.718 Write Zeroes Command: Not Supported 00:23:55.718 Set Features Save Field: Not Supported 00:23:55.718 Reservations: Not Supported 00:23:55.718 Timestamp: Not Supported 00:23:55.718 Copy: Not Supported 00:23:55.718 Volatile Write Cache: Not Present 00:23:55.718 Atomic Write Unit (Normal): 1 00:23:55.718 Atomic Write Unit (PFail): 1 00:23:55.718 Atomic Compare & Write Unit: 1 00:23:55.718 Fused Compare & Write: Not Supported 00:23:55.718 Scatter-Gather List 00:23:55.718 SGL Command Set: Supported 00:23:55.718 SGL Keyed: Supported 00:23:55.718 SGL Bit Bucket Descriptor: Not Supported 00:23:55.718 SGL Metadata Pointer: Not Supported 00:23:55.718 Oversized SGL: Not Supported 00:23:55.718 SGL Metadata Address: Not Supported 00:23:55.718 SGL Offset: Supported 00:23:55.718 Transport SGL Data Block: Not Supported 00:23:55.718 Replay Protected Memory Block: Not Supported 00:23:55.718 00:23:55.718 Firmware Slot Information 00:23:55.718 ========================= 00:23:55.718 Active slot: 0 00:23:55.718 00:23:55.718 00:23:55.718 Error Log 00:23:55.718 ========= 00:23:55.718 00:23:55.718 Active Namespaces 00:23:55.718 ================= 00:23:55.718 Discovery Log Page 00:23:55.718 ================== 00:23:55.718 Generation Counter: 2 00:23:55.718 Number of Records: 2 00:23:55.718 Record Format: 0 00:23:55.718 00:23:55.718 Discovery Log Entry 0 00:23:55.718 ---------------------- 00:23:55.718 Transport Type: 1 (RDMA) 00:23:55.718 Address Family: 1 (IPv4) 00:23:55.718 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:55.718 Entry Flags: 00:23:55.718 Duplicate Returned Information: 0 00:23:55.718 Explicit Persistent Connection Support for Discovery: 0 00:23:55.718 Transport Requirements: 00:23:55.718 Secure Channel: Not Specified 00:23:55.718 Port ID: 1 (0x0001) 00:23:55.718 Controller ID: 65535 (0xffff) 00:23:55.718 Admin Max SQ Size: 32 00:23:55.718 Transport Service Identifier: 4420 00:23:55.718 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:55.718 Transport Address: 192.168.100.8 00:23:55.718 Transport Specific Address Subtype - RDMA 00:23:55.718 RDMA QP Service Type: 1 (Reliable Connected) 00:23:55.718 RDMA Provider Type: 1 (No provider specified) 00:23:55.718 RDMA CM Service: 1 (RDMA_CM) 00:23:55.718 Discovery Log Entry 1 00:23:55.718 ---------------------- 00:23:55.718 
Transport Type: 1 (RDMA) 00:23:55.718 Address Family: 1 (IPv4) 00:23:55.718 Subsystem Type: 2 (NVM Subsystem) 00:23:55.718 Entry Flags: 00:23:55.718 Duplicate Returned Information: 0 00:23:55.718 Explicit Persistent Connection Support for Discovery: 0 00:23:55.718 Transport Requirements: 00:23:55.718 Secure Channel: Not Specified 00:23:55.718 Port ID: 1 (0x0001) 00:23:55.718 Controller ID: 65535 (0xffff) 00:23:55.718 Admin Max SQ Size: 32 00:23:55.718 Transport Service Identifier: 4420 00:23:55.718 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:55.718 Transport Address: 192.168.100.8 00:23:55.718 Transport Specific Address Subtype - RDMA 00:23:55.718 RDMA QP Service Type: 1 (Reliable Connected) 00:23:55.718 RDMA Provider Type: 1 (No provider specified) 00:23:55.718 RDMA CM Service: 1 (RDMA_CM) 00:23:55.718 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:55.718 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.718 get_feature(0x01) failed 00:23:55.718 get_feature(0x02) failed 00:23:55.718 get_feature(0x04) failed 00:23:55.718 ===================================================== 00:23:55.718 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:23:55.718 ===================================================== 00:23:55.718 Controller Capabilities/Features 00:23:55.718 ================================ 00:23:55.718 Vendor ID: 0000 00:23:55.718 Subsystem Vendor ID: 0000 00:23:55.718 Serial Number: 3f677c3b2b108011ddde 00:23:55.718 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:55.718 Firmware Version: 6.7.0-68 00:23:55.718 Recommended Arb Burst: 6 00:23:55.718 IEEE OUI Identifier: 00 00 00 00:23:55.718 Multi-path I/O 00:23:55.718 May have multiple subsystem ports: Yes 00:23:55.718 May have multiple controllers: Yes 00:23:55.718 Associated with SR-IOV VF: No 00:23:55.718 Max Data Transfer Size: 1048576 00:23:55.718 Max Number of Namespaces: 1024 00:23:55.718 Max Number of I/O Queues: 128 00:23:55.718 NVMe Specification Version (VS): 1.3 00:23:55.718 NVMe Specification Version (Identify): 1.3 00:23:55.718 Maximum Queue Entries: 128 00:23:55.718 Contiguous Queues Required: No 00:23:55.718 Arbitration Mechanisms Supported 00:23:55.718 Weighted Round Robin: Not Supported 00:23:55.718 Vendor Specific: Not Supported 00:23:55.718 Reset Timeout: 7500 ms 00:23:55.718 Doorbell Stride: 4 bytes 00:23:55.718 NVM Subsystem Reset: Not Supported 00:23:55.718 Command Sets Supported 00:23:55.718 NVM Command Set: Supported 00:23:55.718 Boot Partition: Not Supported 00:23:55.718 Memory Page Size Minimum: 4096 bytes 00:23:55.718 Memory Page Size Maximum: 4096 bytes 00:23:55.718 Persistent Memory Region: Not Supported 00:23:55.718 Optional Asynchronous Events Supported 00:23:55.718 Namespace Attribute Notices: Supported 00:23:55.718 Firmware Activation Notices: Not Supported 00:23:55.718 ANA Change Notices: Supported 00:23:55.718 PLE Aggregate Log Change Notices: Not Supported 00:23:55.718 LBA Status Info Alert Notices: Not Supported 00:23:55.718 EGE Aggregate Log Change Notices: Not Supported 00:23:55.718 Normal NVM Subsystem Shutdown event: Not Supported 00:23:55.718 Zone Descriptor Change Notices: Not Supported 00:23:55.718 Discovery Log Change Notices: Not Supported 00:23:55.718 Controller Attributes 00:23:55.718 128-bit Host Identifier: 
Supported 00:23:55.718 Non-Operational Permissive Mode: Not Supported 00:23:55.718 NVM Sets: Not Supported 00:23:55.718 Read Recovery Levels: Not Supported 00:23:55.718 Endurance Groups: Not Supported 00:23:55.718 Predictable Latency Mode: Not Supported 00:23:55.718 Traffic Based Keep ALive: Supported 00:23:55.718 Namespace Granularity: Not Supported 00:23:55.718 SQ Associations: Not Supported 00:23:55.718 UUID List: Not Supported 00:23:55.718 Multi-Domain Subsystem: Not Supported 00:23:55.718 Fixed Capacity Management: Not Supported 00:23:55.718 Variable Capacity Management: Not Supported 00:23:55.718 Delete Endurance Group: Not Supported 00:23:55.718 Delete NVM Set: Not Supported 00:23:55.718 Extended LBA Formats Supported: Not Supported 00:23:55.718 Flexible Data Placement Supported: Not Supported 00:23:55.718 00:23:55.718 Controller Memory Buffer Support 00:23:55.718 ================================ 00:23:55.718 Supported: No 00:23:55.718 00:23:55.718 Persistent Memory Region Support 00:23:55.718 ================================ 00:23:55.718 Supported: No 00:23:55.718 00:23:55.718 Admin Command Set Attributes 00:23:55.718 ============================ 00:23:55.718 Security Send/Receive: Not Supported 00:23:55.718 Format NVM: Not Supported 00:23:55.718 Firmware Activate/Download: Not Supported 00:23:55.718 Namespace Management: Not Supported 00:23:55.718 Device Self-Test: Not Supported 00:23:55.718 Directives: Not Supported 00:23:55.718 NVMe-MI: Not Supported 00:23:55.718 Virtualization Management: Not Supported 00:23:55.718 Doorbell Buffer Config: Not Supported 00:23:55.718 Get LBA Status Capability: Not Supported 00:23:55.718 Command & Feature Lockdown Capability: Not Supported 00:23:55.718 Abort Command Limit: 4 00:23:55.718 Async Event Request Limit: 4 00:23:55.718 Number of Firmware Slots: N/A 00:23:55.718 Firmware Slot 1 Read-Only: N/A 00:23:55.718 Firmware Activation Without Reset: N/A 00:23:55.718 Multiple Update Detection Support: N/A 00:23:55.718 Firmware Update Granularity: No Information Provided 00:23:55.718 Per-Namespace SMART Log: Yes 00:23:55.719 Asymmetric Namespace Access Log Page: Supported 00:23:55.719 ANA Transition Time : 10 sec 00:23:55.719 00:23:55.719 Asymmetric Namespace Access Capabilities 00:23:55.719 ANA Optimized State : Supported 00:23:55.719 ANA Non-Optimized State : Supported 00:23:55.719 ANA Inaccessible State : Supported 00:23:55.719 ANA Persistent Loss State : Supported 00:23:55.719 ANA Change State : Supported 00:23:55.719 ANAGRPID is not changed : No 00:23:55.719 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:55.719 00:23:55.719 ANA Group Identifier Maximum : 128 00:23:55.719 Number of ANA Group Identifiers : 128 00:23:55.719 Max Number of Allowed Namespaces : 1024 00:23:55.719 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:55.719 Command Effects Log Page: Supported 00:23:55.719 Get Log Page Extended Data: Supported 00:23:55.719 Telemetry Log Pages: Not Supported 00:23:55.719 Persistent Event Log Pages: Not Supported 00:23:55.719 Supported Log Pages Log Page: May Support 00:23:55.719 Commands Supported & Effects Log Page: Not Supported 00:23:55.719 Feature Identifiers & Effects Log Page:May Support 00:23:55.719 NVMe-MI Commands & Effects Log Page: May Support 00:23:55.719 Data Area 4 for Telemetry Log: Not Supported 00:23:55.719 Error Log Page Entries Supported: 128 00:23:55.719 Keep Alive: Supported 00:23:55.719 Keep Alive Granularity: 1000 ms 00:23:55.719 00:23:55.719 NVM Command Set Attributes 00:23:55.719 ========================== 
00:23:55.719 Submission Queue Entry Size 00:23:55.719 Max: 64 00:23:55.719 Min: 64 00:23:55.719 Completion Queue Entry Size 00:23:55.719 Max: 16 00:23:55.719 Min: 16 00:23:55.719 Number of Namespaces: 1024 00:23:55.719 Compare Command: Not Supported 00:23:55.719 Write Uncorrectable Command: Not Supported 00:23:55.719 Dataset Management Command: Supported 00:23:55.719 Write Zeroes Command: Supported 00:23:55.719 Set Features Save Field: Not Supported 00:23:55.719 Reservations: Not Supported 00:23:55.719 Timestamp: Not Supported 00:23:55.719 Copy: Not Supported 00:23:55.719 Volatile Write Cache: Present 00:23:55.719 Atomic Write Unit (Normal): 1 00:23:55.719 Atomic Write Unit (PFail): 1 00:23:55.719 Atomic Compare & Write Unit: 1 00:23:55.719 Fused Compare & Write: Not Supported 00:23:55.719 Scatter-Gather List 00:23:55.719 SGL Command Set: Supported 00:23:55.719 SGL Keyed: Supported 00:23:55.719 SGL Bit Bucket Descriptor: Not Supported 00:23:55.719 SGL Metadata Pointer: Not Supported 00:23:55.719 Oversized SGL: Not Supported 00:23:55.719 SGL Metadata Address: Not Supported 00:23:55.719 SGL Offset: Supported 00:23:55.719 Transport SGL Data Block: Not Supported 00:23:55.719 Replay Protected Memory Block: Not Supported 00:23:55.719 00:23:55.719 Firmware Slot Information 00:23:55.719 ========================= 00:23:55.719 Active slot: 0 00:23:55.719 00:23:55.719 Asymmetric Namespace Access 00:23:55.719 =========================== 00:23:55.719 Change Count : 0 00:23:55.719 Number of ANA Group Descriptors : 1 00:23:55.719 ANA Group Descriptor : 0 00:23:55.719 ANA Group ID : 1 00:23:55.719 Number of NSID Values : 1 00:23:55.719 Change Count : 0 00:23:55.719 ANA State : 1 00:23:55.719 Namespace Identifier : 1 00:23:55.719 00:23:55.719 Commands Supported and Effects 00:23:55.719 ============================== 00:23:55.719 Admin Commands 00:23:55.719 -------------- 00:23:55.719 Get Log Page (02h): Supported 00:23:55.719 Identify (06h): Supported 00:23:55.719 Abort (08h): Supported 00:23:55.719 Set Features (09h): Supported 00:23:55.719 Get Features (0Ah): Supported 00:23:55.719 Asynchronous Event Request (0Ch): Supported 00:23:55.719 Keep Alive (18h): Supported 00:23:55.719 I/O Commands 00:23:55.719 ------------ 00:23:55.719 Flush (00h): Supported 00:23:55.719 Write (01h): Supported LBA-Change 00:23:55.719 Read (02h): Supported 00:23:55.719 Write Zeroes (08h): Supported LBA-Change 00:23:55.719 Dataset Management (09h): Supported 00:23:55.719 00:23:55.719 Error Log 00:23:55.719 ========= 00:23:55.719 Entry: 0 00:23:55.719 Error Count: 0x3 00:23:55.719 Submission Queue Id: 0x0 00:23:55.719 Command Id: 0x5 00:23:55.719 Phase Bit: 0 00:23:55.719 Status Code: 0x2 00:23:55.719 Status Code Type: 0x0 00:23:55.719 Do Not Retry: 1 00:23:55.719 Error Location: 0x28 00:23:55.719 LBA: 0x0 00:23:55.719 Namespace: 0x0 00:23:55.719 Vendor Log Page: 0x0 00:23:55.719 ----------- 00:23:55.719 Entry: 1 00:23:55.719 Error Count: 0x2 00:23:55.719 Submission Queue Id: 0x0 00:23:55.719 Command Id: 0x5 00:23:55.719 Phase Bit: 0 00:23:55.719 Status Code: 0x2 00:23:55.719 Status Code Type: 0x0 00:23:55.719 Do Not Retry: 1 00:23:55.719 Error Location: 0x28 00:23:55.719 LBA: 0x0 00:23:55.719 Namespace: 0x0 00:23:55.719 Vendor Log Page: 0x0 00:23:55.719 ----------- 00:23:55.719 Entry: 2 00:23:55.719 Error Count: 0x1 00:23:55.719 Submission Queue Id: 0x0 00:23:55.719 Command Id: 0x0 00:23:55.719 Phase Bit: 0 00:23:55.719 Status Code: 0x2 00:23:55.719 Status Code Type: 0x0 00:23:55.719 Do Not Retry: 1 00:23:55.719 Error Location: 
0x28 00:23:55.719 LBA: 0x0 00:23:55.719 Namespace: 0x0 00:23:55.719 Vendor Log Page: 0x0 00:23:55.719 00:23:55.719 Number of Queues 00:23:55.719 ================ 00:23:55.719 Number of I/O Submission Queues: 128 00:23:55.719 Number of I/O Completion Queues: 128 00:23:55.719 00:23:55.719 ZNS Specific Controller Data 00:23:55.719 ============================ 00:23:55.719 Zone Append Size Limit: 0 00:23:55.719 00:23:55.719 00:23:55.719 Active Namespaces 00:23:55.719 ================= 00:23:55.719 get_feature(0x05) failed 00:23:55.719 Namespace ID:1 00:23:55.719 Command Set Identifier: NVM (00h) 00:23:55.719 Deallocate: Supported 00:23:55.719 Deallocated/Unwritten Error: Not Supported 00:23:55.719 Deallocated Read Value: Unknown 00:23:55.719 Deallocate in Write Zeroes: Not Supported 00:23:55.719 Deallocated Guard Field: 0xFFFF 00:23:55.719 Flush: Supported 00:23:55.719 Reservation: Not Supported 00:23:55.719 Namespace Sharing Capabilities: Multiple Controllers 00:23:55.719 Size (in LBAs): 1953525168 (931GiB) 00:23:55.719 Capacity (in LBAs): 1953525168 (931GiB) 00:23:55.719 Utilization (in LBAs): 1953525168 (931GiB) 00:23:55.719 UUID: 685b0a54-e0bd-47ca-a63a-adcb1ca1303e 00:23:55.719 Thin Provisioning: Not Supported 00:23:55.719 Per-NS Atomic Units: Yes 00:23:55.719 Atomic Boundary Size (Normal): 0 00:23:55.719 Atomic Boundary Size (PFail): 0 00:23:55.719 Atomic Boundary Offset: 0 00:23:55.719 NGUID/EUI64 Never Reused: No 00:23:55.719 ANA group ID: 1 00:23:55.719 Namespace Write Protected: No 00:23:55.719 Number of LBA Formats: 1 00:23:55.719 Current LBA Format: LBA Format #00 00:23:55.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:55.719 00:23:55.719 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:55.719 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.719 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:55.719 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:55.719 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:55.719 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:55.719 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.719 00:10:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:55.719 rmmod nvme_rdma 00:23:55.719 rmmod nvme_fabrics 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 
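The unload traced at nvmf/common.sh@120-125 just above runs with errors tolerated, since nvme-rdma can still be busy while connections drain. A sketch of that loop; the retry body is not visible in the xtrace, so the break-and-sleep shape is an assumption:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break   # retry while the module is in use
        sleep 1                             # assumed delay; not shown by xtrace
    done
    modprobe -v -r nvme-fabrics
    set -e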
00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:23:55.719 00:10:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:57.094 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:57.094 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:57.094 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:57.094 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:57.094 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:57.094 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:57.094 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:57.094 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:57.094 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:57.094 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:57.094 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:57.094 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:57.094 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:57.094 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:57.094 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:57.094 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:58.467 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:58.467 00:23:58.467 real 0m7.993s 00:23:58.467 user 0m2.322s 00:23:58.467 sys 0m3.773s 00:23:58.467 00:10:27 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:58.467 00:10:27 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.467 ************************************ 00:23:58.467 END TEST nvmf_identify_kernel_target 00:23:58.467 ************************************ 00:23:58.467 00:10:27 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:23:58.467 00:10:27 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:58.467 00:10:27 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:58.467 00:10:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:58.467 ************************************ 00:23:58.467 START TEST nvmf_auth_host 00:23:58.467 ************************************ 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:23:58.467 * Looking for test storage... 
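For reference, the kernel target that answered the discover and identify calls above was built and torn down entirely through nvmet configfs. A condensed sketch of the configure_kernel_target / clean_kernel_target sequence from the trace; xtrace does not record redirections, so every '>' destination below is an inferred (standard nvmet configfs) attribute name:

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # inferred
    echo 1 > "$subsys/attr_allow_any_host"                         # inferred
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # inferred
    echo 1 > "$subsys/namespaces/1/enable"                         # inferred
    echo 192.168.100.8 > "$port/addr_traddr"                       # inferred
    echo rdma > "$port/addr_trtype"                                # inferred
    echo 4420 > "$port/addr_trsvcid"                               # inferred
    echo ipv4 > "$port/addr_adrfam"                                # inferred
    ln -s "$subsys" "$port/subsystems/"
    # teardown: drop the port link before removing the configfs directories
    echo 0 > "$subsys/namespaces/1/enable"                         # inferred
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_rdma nvmet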
00:23:58.467 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:58.467 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.468 00:10:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:24:01.000 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:24:01.000 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:24:01.000 Found net devices under 0000:09:00.0: mlx_0_0 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.000 00:10:30 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:24:01.000 Found net devices under 0000:09:00.1: mlx_0_1 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:01.000 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
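The scan above keys off sysfs: each matched mlx5 PCI function lists its netdev name under its PCI node, which is where the "Found net devices under ..." lines come from. A minimal sketch of that mapping, with the addresses taken from this log:

    for pci in 0000:09:00.0 0000:09:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"   # mlx_0_0 / mlx_0_1 here
        done
    done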
00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:01.001 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:01.001 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:24:01.001 altname enp9s0f0np0 00:24:01.001 inet 192.168.100.8/24 scope global mlx_0_0 00:24:01.001 valid_lft forever preferred_lft forever 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:01.001 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:01.001 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:24:01.001 altname enp9s0f1np1 00:24:01.001 inet 192.168.100.9/24 scope global mlx_0_1 00:24:01.001 valid_lft forever preferred_lft forever 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:01.001 192.168.100.9' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:01.001 192.168.100.9' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:01.001 192.168.100.9' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 
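The head/tail pair traced at nvmf/common.sh@457-458 is how the two-line RDMA_IP_LIST turns into the two target addresses; written out:

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)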
00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=625067 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 625067 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 625067 ']' 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
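
At this point the transport options are fixed ('-t rdma --num-shared-buffers 1024'), nvme-rdma is loaded, and nvmfappstart has launched the target with nvme_auth debug logging before waitforlisten polls for the RPC socket. Roughly, with waitforlisten's retry loop simplified (the real helper also honors max_retries and a configurable RPC socket path):

    modprobe nvme-rdma
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready to serve RPCs.
    until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
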
00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:01.001 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.604 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:01.604 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:24:01.604 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.604 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.604 00:10:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.604 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bc8b84026e51bcde2de4554f1791b90c 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7sX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bc8b84026e51bcde2de4554f1791b90c 0 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bc8b84026e51bcde2de4554f1791b90c 0 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bc8b84026e51bcde2de4554f1791b90c 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7sX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7sX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.7sX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2d7b49546fb7726635aa959643e6b9ef32b57978a14c0c3816745cbb66784580 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cId 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2d7b49546fb7726635aa959643e6b9ef32b57978a14c0c3816745cbb66784580 3 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2d7b49546fb7726635aa959643e6b9ef32b57978a14c0c3816745cbb66784580 3 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2d7b49546fb7726635aa959643e6b9ef32b57978a14c0c3816745cbb66784580 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cId 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cId 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.cId 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dbb47d7a0e7c97616d84ec3da8e70253c5f3b0003a54a444 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gzv 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dbb47d7a0e7c97616d84ec3da8e70253c5f3b0003a54a444 0 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dbb47d7a0e7c97616d84ec3da8e70253c5f3b0003a54a444 0 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dbb47d7a0e7c97616d84ec3da8e70253c5f3b0003a54a444 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.gzv 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gzv 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.gzv 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1a4e494f1c40fe2441b1ab5cc830243ef6ce096f29ab97db 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JZp 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1a4e494f1c40fe2441b1ab5cc830243ef6ce096f29ab97db 2 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1a4e494f1c40fe2441b1ab5cc830243ef6ce096f29ab97db 2 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1a4e494f1c40fe2441b1ab5cc830243ef6ce096f29ab97db 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JZp 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JZp 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.JZp 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=587f123e95dd25c7474bc4b8a992d2bf 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.EVp 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 587f123e95dd25c7474bc4b8a992d2bf 1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 587f123e95dd25c7474bc4b8a992d2bf 1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=587f123e95dd25c7474bc4b8a992d2bf 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.EVp 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.EVp 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.EVp 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b2a6d182cd11b3777619fc53072f26ba 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6br 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b2a6d182cd11b3777619fc53072f26ba 1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b2a6d182cd11b3777619fc53072f26ba 1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b2a6d182cd11b3777619fc53072f26ba 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:01.605 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6br 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6br 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6br 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:01.863 00:10:30 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f793906c4655672939e79a5724f14db04e299d867d0ee98 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Sui 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f793906c4655672939e79a5724f14db04e299d867d0ee98 2 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f793906c4655672939e79a5724f14db04e299d867d0ee98 2 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f793906c4655672939e79a5724f14db04e299d867d0ee98 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Sui 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Sui 00:24:01.863 00:10:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Sui 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2c75cb6f7b3dcd90787de4159b38682f 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Di5 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2c75cb6f7b3dcd90787de4159b38682f 0 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2c75cb6f7b3dcd90787de4159b38682f 0 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2c75cb6f7b3dcd90787de4159b38682f 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Di5 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Di5 00:24:01.863 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Di5 
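
Each gen_dhchap_key call above follows the same shape: pull len/2 random bytes from /dev/urandom as a hex string via xxd, then hand that to an untraced `python -` heredoc that prints the DHHC-1 representation. The heredoc body never appears in xtrace output, so the sketch below is a reconstruction; the CRC32-appended base64 payload is an assumption, inferred from the DH-HMAC-CHAP secret format the printed keys match (DHHC-1:<digest id>:<base64(secret + crc32)>:).

    gen_dhchap_key() {
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local digest=${digests[$1]} len=$2 key
        # len hex characters require len/2 bytes of randomness.
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"
    }
    gen_dhchap_key null 32   # same shape as keys[0] above: DHHC-1:00:...:
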
00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a95276dd2dca9c12d4c0e53631a0f398ad3a0a0f23d71d72683fd16264a7821a 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZhP 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a95276dd2dca9c12d4c0e53631a0f398ad3a0a0f23d71d72683fd16264a7821a 3 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a95276dd2dca9c12d4c0e53631a0f398ad3a0a0f23d71d72683fd16264a7821a 3 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a95276dd2dca9c12d4c0e53631a0f398ad3a0a0f23d71d72683fd16264a7821a 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZhP 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZhP 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ZhP 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 625067 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 625067 ']' 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
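
With the target's RPC socket up, each key file generated above is loaded into the keyring as keyN (and, where a controller key was generated, ckeyN). Condensed, the rpc_cmd calls that follow amount to the loop below (rpc_cmd is assumed here to be the usual thin wrapper around scripts/rpc.py against /var/tmp/spdk.sock):

    for i in "${!keys[@]}"; do
        ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
        # Controller keys are optional; ckeys[4] is empty in this run.
        [[ -n ${ckeys[i]} ]] && ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done
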
00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:01.864 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7sX 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.cId ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cId 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.gzv 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.JZp ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JZp 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EVp 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6br ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6br 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Sui 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Di5 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Di5 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZhP 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.122 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:02.123 00:10:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:24:03.495 Waiting for block devices as requested 00:24:03.495 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:03.495 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:03.495 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:03.753 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:03.753 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:03.753 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:03.753 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:04.011 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:04.011 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:04.011 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:04.011 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:04.270 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:04.270 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:04.270 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:04.270 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:04.530 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:04.530 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:04.787 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:04.787 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:04.787 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:04.788 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:04.788 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:04.788 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:04.788 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:04.788 00:10:34 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:04.788 00:10:34 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:05.046 No valid GPT data, bailing 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:05.046 
00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -t rdma -s 4420 00:24:05.046 00:24:05.046 Discovery Log Number of Records 2, Generation counter 2 00:24:05.046 =====Discovery Log Entry 0====== 00:24:05.046 trtype: rdma 00:24:05.046 adrfam: ipv4 00:24:05.046 subtype: current discovery subsystem 00:24:05.046 treq: not specified, sq flow control disable supported 00:24:05.046 portid: 1 00:24:05.046 trsvcid: 4420 00:24:05.046 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:05.046 traddr: 192.168.100.8 00:24:05.046 eflags: none 00:24:05.046 rdma_prtype: not specified 00:24:05.046 rdma_qptype: connected 00:24:05.046 rdma_cms: rdma-cm 00:24:05.046 rdma_pkey: 0x0000 00:24:05.046 =====Discovery Log Entry 1====== 00:24:05.046 trtype: rdma 00:24:05.046 adrfam: ipv4 00:24:05.046 subtype: nvme subsystem 00:24:05.046 treq: not specified, sq flow control disable supported 00:24:05.046 portid: 1 00:24:05.046 trsvcid: 4420 00:24:05.046 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:05.046 traddr: 192.168.100.8 00:24:05.046 eflags: none 00:24:05.046 rdma_prtype: not specified 00:24:05.046 rdma_qptype: connected 00:24:05.046 rdma_cms: rdma-cm 00:24:05.046 rdma_pkey: 0x0000 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.046 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.304 nvme0n1 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.304 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.305 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.563 nvme0n1 00:24:05.563 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.563 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.563 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.563 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.563 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.563 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.563 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.563 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.564 00:10:34 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.564 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.822 nvme0n1 00:24:05.822 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.822 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.822 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.822 00:10:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.822 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.822 00:10:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.822 00:10:35 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.822 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.081 nvme0n1 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:06.081 00:10:35 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.081 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.339 nvme0n1 00:24:06.339 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.339 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.339 00:10:35 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.339 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.340 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.598 nvme0n1 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.598 00:10:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.857 nvme0n1 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.857 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.115 nvme0n1 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.115 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.373 nvme0n1 00:24:07.373 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.373 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.373 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.373 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.373 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.373 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.374 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:07.632 
00:10:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.632 nvme0n1 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.632 00:10:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.891 nvme0n1 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.891 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:08.149 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:08.150 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.150 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:08.150 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.407 nvme0n1 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:08.407 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.408 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.665 nvme0n1 00:24:08.665 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.665 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.665 00:10:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.665 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.665 00:10:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.665 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.922 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.923 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.180 nvme0n1 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.180 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.438 nvme0n1 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.438 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.695 00:10:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.954 nvme0n1 00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.954 
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG:
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=:
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG:
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]]
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=:
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.954 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.520 nvme0n1
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==:
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==:
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==:
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]]
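The get_main_ns_ip block that repeats before every attach is a small transport-to-variable lookup plus indirection; a sketch consistent with the trace above (TEST_TRANSPORT is assumed to hold the rdma literal seen in the [[ -z rdma ]] check, and the array stores variable names, not values):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # variable *name* to dereference
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_FIRST_TARGET_IP
        [[ -z ${!ip} ]] && return 1            # here: 192.168.100.8
        echo "${!ip}"
    }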
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==:
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.520 00:10:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.086 nvme0n1
00:24:11.086 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.086 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:11.086 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.086 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.086 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:11.086 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.086 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:11.086 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.086 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+:
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu:
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+:
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]]
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu:
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
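Every secret in this trace uses the DH-HMAC-CHAP representation DHHC-1:<id>:<base64>:, where the two-digit field identifies the hash class of the secret (00 = untransformed; 01/02/03 correspond to SHA-256/384/512 and match the 32/48/64-byte payloads seen here) and the base64 field carries the secret followed by a 4-byte CRC-32. That is easy to sanity-check against any key in this log with plain shell, no SPDK needed:

    # take field 3, base64-decode it, subtract the 4 CRC bytes
    key='DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+:'
    b64=$(cut -d: -f3 <<< "$key")
    len=$(base64 -d <<< "$b64" | wc -c)
    echo "secret length: $((len - 4)) bytes"   # 32 for this 01-class key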
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.344 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.910 nvme0n1
00:24:11.910 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.910 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:11.910 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.910 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.910 00:10:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:11.910 00:10:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==:
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ:
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==:
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]]
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ:
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
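The repeated [[ nvme0 == \n\v\m\e\0 ]] lines look odd but are only an xtrace artifact: bash escapes a quoted right-hand side of == inside [[ ]] to show it is matched literally rather than as a glob. The whole verification step that follows each attach reduces to:

    # confirm the authenticated controller actually exists, then tear it down
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]    # xtrace prints the RHS as \n\v\m\e\0
    rpc_cmd bdev_nvme_detach_controller nvme0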
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.910 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.476 nvme0n1
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=:
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:12.476 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=:
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:12.477 00:10:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.044 nvme0n1
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG:
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=:
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG:
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]]
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=:
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:13.044 00:10:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.416 nvme0n1
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
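On the SPDK host side each iteration is a pair of RPCs: bdev_nvme_set_options first pins the digest/dhgroup the initiator may negotiate, then bdev_nvme_attach_controller presents the DH-CHAP key(s). Outside this harness the same pair could be issued directly with scripts/rpc.py; key0/ckey0 below stand for key names registered with the keyring earlier in the run (that setup is not shown in this excerpt):

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0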
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==:
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==:
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==:
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]]
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==:
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:14.416 00:10:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:15.349 nvme0n1
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+:
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu:
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+:
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]]
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu:
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:15.349 00:10:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:16.338 nvme0n1
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==:
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ:
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
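The recurring ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58 relies on the bash ${var:+word} expansion: the ckey array picks up the two extra arguments only when a controller key exists for that keyid, so the later attach can expand "${ckey[@]}" unconditionally. In isolation:

    ckeys=([2]="some-secret" [4]="")   # keyid 4 has no controller key
    for keyid in 2 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-no extra args}"
    done
    # keyid=2 -> --dhchap-ctrlr-key ckey2
    # keyid=4 -> no extra args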
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==:
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]]
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ:
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:16.338 00:10:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:17.273 00:10:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:17.273 00:10:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:17.273 00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:17.273 00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:17.273 00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:17.273 00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:17.273 00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
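Keyid 4 is the unidirectional case: its ckey is empty, so [[ -z '' ]] skips the controller-key write on the target and the attach carries only --dhchap-key key4, meaning the target authenticates the host but not the reverse; keyids 0-3 add the controller key and exercise bidirectional DH-HMAC-CHAP. Side by side, with the transport arguments exactly as in the trace:

    common=(-b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0)
    # bidirectional: host and controller each prove possession of a secret
    rpc_cmd bdev_nvme_attach_controller "${common[@]}" --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # unidirectional: only the host is challenged
    rpc_cmd bdev_nvme_attach_controller "${common[@]}" --dhchap-key key4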
00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:10:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:10:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:10:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.206 nvme0n1
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG:
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=:
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG:
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]]
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=:
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.206 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.464 nvme0n1
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
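With host/auth.sh@100 the sweep has just advanced from sha256 to sha384, which exposes the overall shape of the test: three nested loops over digests, dhgroups, and key indices, each combination running one set-key/connect/verify/detach cycle. This excerpt shows the sha256 and sha384 digests with ffdhe2048-ffdhe8192 groups and keyids 0-4; the exact arrays are defined earlier in auth.sh and are assumed here:

    for digest in "${digests[@]}"; do              # auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do        # auth.sh@101
            for keyid in "${!keys[@]}"; do         # auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@104
            done
        done
    done
    # e.g. 5 dhgroups x 5 keyids = 25 authenticated connect cycles per digest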
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==:
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==:
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==:
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]]
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==:
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.464 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.465 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.723 nvme0n1
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+:
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu:
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+:
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]]
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu:
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest
dhgroup keyid ckey 00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:18.723 00:10:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.723 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.981 nvme0n1 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.981 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.239 nvme0n1 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.239 00:10:48 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.239 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.497 nvme0n1 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.497 00:10:48 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:19.497 00:10:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:19.498 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.498 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.498 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.755 nvme0n1 
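Note: the nvmf/common.sh@741-@755 frames that repeat before every attach are get_main_ns_ip. It maps the transport type to the name of the environment variable holding the target address, dereferences it, and prints the result (192.168.100.8 on this rig). A condensed reconstruction, assuming TEST_TRANSPORT=rdma and NVMF_FIRST_TARGET_IP set by the earlier setup:

  get_main_ns_ip() {                      # reconstruction of nvmf/common.sh@741-@755
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # @747: both the transport and its candidate variable name must be non-empty
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]} # @748: ip holds a variable *name*
      [[ -z ${!ip} ]] && return 1          # @750: indirect expansion checks its value
      echo "${!ip}"                        # @755: here -> 192.168.100.8
  }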
00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.755 00:10:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.755 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.012 nvme0n1 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:20.012 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.013 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.271 nvme0n1 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.271 
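Note: every RPC in this log is bracketed by the same three frames: common/autotest_common.sh@559 (xtrace_disable), @10 (set +x) and, once tracing resumes, @587 ([[ 0 == 0 ]]). That is the harness muting xtrace while the JSON-RPC round-trip runs and then asserting on the saved exit code; the literal 0 == 0 means the call returned success. A simplified model of the wrapper follows; the real helper talks to a long-lived rpc.py session, so take the body as illustrative only.

  rpc_cmd() {                # simplified model of the traced wrapper, not SPDK's exact code
      xtrace_disable         # autotest_common.sh@559 -> set +x at @10
      local rc=0
      "$rootdir/scripts/rpc.py" "$@" || rc=$?
      xtrace_restore
      [[ $rc == 0 ]]         # traced as "[[ 0 == 0 ]]" (@587) whenever the RPC succeeded
  }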
00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.271 
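Note: the secrets exchanged throughout this run use the NVMe-oF DHHC-1 representation, DHHC-1:<hh>:<base64>:, where <hh> says which HMAC the secret is bound to (00 = not hash-qualified, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. That makes the sizes easy to sanity-check from the shell; the key below is key3 from this run, whose 02 tag matches a 48-byte (SHA-384-sized) secret.

  key='DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==:'
  b64=${key#DHHC-1:*:}                 # strip the "DHHC-1:<hh>:" prefix
  b64=${b64%:}                         # and the trailing ':'
  echo -n "$b64" | base64 -d | wc -c   # prints 52 = 48-byte secret + 4-byte CRC-32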
00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.271 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.528 nvme0n1 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.528 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:20.786 00:10:49 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.786 00:10:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.786 nvme0n1 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.786 00:10:50 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.044 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:21.045 00:10:50 
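Note: the host/auth.sh@101-@104 frames that appear between cycles (visible just above as the run moves on to ffdhe4096) are the driver loop: every DH group is exercised against every key index. The reconstructed shape is below, with array contents inferred from the trace (digest fixed at sha384 in this phase, key indexes 0-4, groups ffdhe2048/3072/4096 so far).

  for dhgroup in "${dhgroups[@]}"; do       # @101: ffdhe2048, ffdhe3072, ffdhe4096, ...
      for keyid in "${!keys[@]}"; do        # @102: 0 1 2 3 4
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # @103: program the target
          connect_authenticate sha384 "$dhgroup" "$keyid"  # @104: connect, verify, detach
      done
  done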
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.045 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.304 nvme0n1 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.304 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.562 nvme0n1 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.562 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.563 00:10:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.129 nvme0n1 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.129 
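Note: a side remark on the odd-looking frame [[ nvme0 == \n\v\m\e\0 ]] that closes each verification above: the backslashes are not in the script. In [[ ]], the right-hand side of == is a pattern, and when that operand is quoted in the source, bash's xtrace escapes each of its characters to show the match is literal rather than a glob. Quick demonstration:

  set -x
  name=nvme0
  [[ $name == "nvme0" ]] && echo match
  # xtrace prints:  [[ nvme0 == \n\v\m\e\0 ]]  -- a quoted, literal comparison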
00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.129 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.388 nvme0n1 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.388 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.645 nvme0n1 00:24:22.645 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
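
The cycle traced above is the host half of connect_authenticate: pin the digest/dhgroup pair under test, resolve the RDMA target address, attach with the key pair for the current keyid, then verify and detach. A minimal bash sketch of that RPC sequence, assuming SPDK's scripts/rpc.py wrapper (what rpc_cmd invokes in the trace) and that key2/ckey2 are already registered with the bdev layer; addresses and NQNs are copied from the trace:

    # Host side of one iteration (sha384 / ffdhe4096 / keyid 2).
    RPC=./spdk/scripts/rpc.py   # path is an assumption about the checkout layout

    # Restrict the initiator to the digest/dhgroup pair under test.
    $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Attach over RDMA, authenticating with key2 and requiring the controller
    # to prove possession of ckey2 (bidirectional DH-HMAC-CHAP).
    $RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
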
00:24:22.645 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.645 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.645 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.645 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.645 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.645 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.646 00:10:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.646 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.646 00:10:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:22.903 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.904 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.469 nvme0n1 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.469 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.470 00:10:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.036 nvme0n1 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.036 00:10:53 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.036 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.601 nvme0n1 00:24:24.601 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.601 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.601 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.601 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.601 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.601 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.601 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.601 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.601 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.602 00:10:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.169 nvme0n1 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.169 00:10:54 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.169 00:10:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.735 nvme0n1 00:24:25.735 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.736 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.736 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.736 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.736 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.736 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.736 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.736 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.736 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.736 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.994 00:10:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.927 nvme0n1 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:26.927 00:10:56 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.927 00:10:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.861 nvme0n1 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.861 
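
Every successful attach in this sweep is followed by the same check before teardown: list the bdev controllers, confirm one named nvme0 came up, then detach it so the next keyid starts from a clean slate. Sketched under the same rpc.py assumption as above:

    # Verify the authenticated controller exists, then tear it down.
    name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || { echo "authenticated connect failed" >&2; exit 1; }
    $RPC bdev_nvme_detach_controller nvme0
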
00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.861 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.862 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:27.862 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:27.862 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:27.862 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:27.862 00:10:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:27.862 00:10:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.862 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.862 00:10:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.236 nvme0n1 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.236 
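
On the target side, nvmet_auth_set_key (host/auth.sh@42-51) echoes the HMAC name, the DH group, and the DHHC-1 host and controller secrets; the redirection targets are outside this slice of the trace. A sketch under the assumption that they land in the kernel nvmet configfs per-host DH-CHAP attributes — the paths below are that assumption, not something shown in the log, and the secrets are truncated here for brevity:

    # Target-side key setup for keyid 3, assuming kernel nvmet configfs layout.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"        # digest, as echoed at @48
    echo ffdhe8192       > "$host/dhchap_dhgroup"    # dhgroup, as echoed at @49
    echo 'DHHC-1:02:N2Y3...' > "$host/dhchap_key"       # host secret (key3), truncated
    echo 'DHHC-1:00:MmM3...' > "$host/dhchap_ctrl_key"  # controller secret (ckey3), truncated
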
00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.236 00:10:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.169 nvme0n1 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.170 00:10:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.128 nvme0n1 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.128 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 
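
The @100-@104 markers recurring through the trace expose the shape of the sweep: an outer loop over digests (sha384 above, sha512 from here on), a middle loop over DH groups, and an inner loop over every keyid, with the target keyed first and the authenticated connect attempted second. A reconstructed skeleton; the array contents are inferred from the values that appear in this log, not read from host/auth.sh:

    # Sweep skeleton (host/auth.sh@100-104); arrays inferred from the trace.
    for digest in "${digests[@]}"; do         # @100 - includes sha384, sha512
      for dhgroup in "${dhgroups[@]}"; do     # @101 - ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do        # @102 - keyids 0..4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103: target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: host side
        done
      done
    done
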
00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.129 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.395 nvme0n1 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.395 00:11:00 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.395 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.396 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.654 nvme0n1 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
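
Each pass in this stretch of the log reduces to the same four host-side RPC steps. Spelled out with scripts/rpc.py directly (rpc_cmd in the trace is the suite's wrapper around it; the address, NQNs, and flags below are copied from the sha512/ffdhe2048, keyid 1 pass above):

    rpc=./scripts/rpc.py

    # Restrict negotiation to exactly one digest and one DH group.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Attach with the host key and (when present) the controller key.
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Authentication succeeded iff the controller shows up by name.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    $rpc bdev_nvme_detach_controller nvme0

Forcing a single digest/dhgroup per pass is what makes the matrix exhaustive: a successful attach proves that specific combination negotiated, not a fallback.
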
00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.654 00:11:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.912 nvme0n1 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:31.912 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.913 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.171 nvme0n1 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.171 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.172 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.430 nvme0n1 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.430 00:11:01 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
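
On the target side, the three echos inside nvmet_auth_set_key above ('hmac(sha512)', the dhgroup name, the DHHC-1 key) feed per-host attributes; xtrace hides the redirections, so the configfs paths below are an assumption based on the kernel nvmet target's usual layout, not something read from this log:

    # Likely destinations of nvmet_auth_set_key's echos, assuming the
    # kernel nvmet target's configfs host attributes (paths are a guess).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha512)'  > "$host/dhchap_hash"      # DH-HMAC-CHAP digest
    echo ffdhe3072       > "$host/dhchap_dhgroup"   # FFDHE group
    echo 'DHHC-1:00:...' > "$host/dhchap_key"       # host secret (elided)
    # and, when a controller key (ckey) is configured for bidirectional auth:
    # echo 'DHHC-1:03:...' > "$host/dhchap_ctrl_key"
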
ip=NVMF_FIRST_TARGET_IP 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.430 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.689 nvme0n1 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.689 00:11:01 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.689 00:11:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.947 nvme0n1 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.947 00:11:02 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.947 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.205 nvme0n1 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.205 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.463 nvme0n1 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.463 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.722 00:11:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.722 nvme0n1 00:24:33.722 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.722 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.722 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
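
Worth noting in the keyid=4 passes like the one above: the controller key is injected via ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), the shell's optional-flag idiom. ${var:+words} expands to the alternate words only when var is set and non-empty, so key 4, which has no ckey, contributes no flags at all rather than an empty argument. A standalone demonstration (demo_optional_flag is hypothetical):

    # Build an argument array that is either two words or empty.
    demo_optional_flag() {
        local ckey=$1
        local -a extra=(${ckey:+--dhchap-ctrlr-key "$ckey"})
        echo "argc=${#extra[@]} argv='${extra[*]}'"
    }
    demo_optional_flag 'ckey3'   # argc=2 argv='--dhchap-ctrlr-key ckey3'
    demo_optional_flag ''        # argc=0 argv=''

That is why the keyid=4 attach calls in this log carry only --dhchap-key key4 and no --dhchap-ctrlr-key.
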
xtrace_disable 00:24:33.722 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.722 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.722 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.980 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.981 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.238 nvme0n1 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
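
The success check repeated throughout, [[ nvme0 == \n\v\m\e\0 ]], looks odd but is deliberate: the right-hand side of == inside [[ ]] is a glob pattern, and escaping every character forces a literal comparison. A quick illustration:

    name=nvme0
    [[ $name == \n\v\m\e\0 ]] && echo literal-escaped match
    [[ $name == "nvme0"    ]] && echo quoted match        # equivalent
    [[ $name == nvme*      ]] && echo glob match          # would also match nvme1

The per-character escaping is just the mechanical form the harness emits; quoting the right-hand side achieves the same literal match.
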
00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.239 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.496 nvme0n1 00:24:34.496 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.496 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.496 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.496 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.496 
00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.496 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.755 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.756 00:11:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.014 nvme0n1 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 
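The nvmet_auth_set_key calls traced above stage each DHHC-1 secret on the kernel soft-target before the host attempts to connect. Bash xtrace prints the echo arguments at auth.sh@48-51 but not their redirections, so the sketch below is a plausible reconstruction assuming the standard Linux nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the path, positional parameters, and placeholder values are illustrative, not taken from this run.

    # Sketch only: redirection targets are not visible in the xtrace above.
    digest='hmac(sha512)'    # echoed at auth.sh@48
    dhgroup=ffdhe4096        # echoed at auth.sh@49
    key=$1                   # DHHC-1:xx:<base64>: secret for this keyid
    ckey=$2                  # controller-side secret; empty for keyid 4

    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "$digest"  > "$host_dir/dhchap_hash"
    echo "$dhgroup" > "$host_dir/dhchap_dhgroup"
    echo "$key"     > "$host_dir/dhchap_key"
    # auth.sh@51 guards this write: a controller key is only set when the
    # keyid has one, which is what enables bidirectional authentication.
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
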
00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.014 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.015 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.272 nvme0n1 00:24:35.272 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.272 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.272 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.272 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.272 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.272 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.530 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:35.531 00:11:04 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.531 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.789 nvme0n1 00:24:35.789 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.789 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.789 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.789 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.789 00:11:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.789 00:11:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.789 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.355 nvme0n1 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.355 00:11:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.920 nvme0n1 00:24:36.920 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.920 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.920 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.920 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.920 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.178 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.179 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.745 nvme0n1 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.745 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.746 00:11:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.312 nvme0n1 00:24:38.312 00:11:07 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.312 00:11:07 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.312 00:11:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.877 nvme0n1 00:24:38.877 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.877 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.877 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.877 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.877 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.877 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM4Yjg0MDI2ZTUxYmNkZTJkZTQ1NTRmMTc5MWI5MGM42zFG: 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: ]] 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ3YjQ5NTQ2ZmI3NzI2NjM1YWE5NTk2NDNlNmI5ZWYzMmI1Nzk3OGExNGMwYzM4MTY3NDVjYmI2Njc4NDU4MEd53Po=: 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.135 00:11:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.068 nvme0n1 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.068 00:11:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 nvme0n1 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTg3ZjEyM2U5NWRkMjVjNzQ3NGJjNGI4YTk5MmQyYmY39jY+: 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: ]] 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjJhNmQxODJjZDExYjM3Nzc2MTlmYzUzMDcyZjI2YmGuUwcu: 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 00:11:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.376 nvme0n1 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2Y3OTM5MDZjNDY1NTY3MjkzOWU3OWE1NzI0ZjE0ZGIwNGUyOTlkODY3ZDBlZTk43BnWgw==: 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: ]] 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM3NWNiNmY3YjNkY2Q5MDc4N2RlNDE1OWIzODY4MmZLU0XZ: 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:42.376 00:11:11 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.376 00:11:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.309 nvme0n1 00:24:43.309 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.309 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.309 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.309 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTk1Mjc2ZGQyZGNhOWMxMmQ0YzBlNTM2MzFhMGYzOThhZDNhMGEwZjIzZDcxZDcyNjgzZmQxNjI2NGE3ODIxYRRxcp8=: 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.310 00:11:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.684 nvme0n1 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJiNDdkN2EwZTdjOTc2MTZkODRlYzNkYThlNzAyNTNjNWYzYjAwMDNhNTRhNDQ0KIQlCA==: 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: ]] 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE0ZTQ5NGYxYzQwZmUyNDQxYjFhYjVjYzgzMDI0M2VmNmNlMDk2ZjI5YWI5N2RiXGJToQ==: 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.684 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.685 request:
00:24:44.685 {
00:24:44.685 "name": "nvme0",
00:24:44.685 "trtype": "rdma",
00:24:44.685 "traddr": "192.168.100.8",
00:24:44.685 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:24:44.685 "adrfam": "ipv4",
00:24:44.685 "trsvcid": "4420",
00:24:44.685 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:24:44.685 "method": "bdev_nvme_attach_controller",
00:24:44.685 "req_id": 1
00:24:44.685 }
00:24:44.685 Got JSON-RPC error response
00:24:44.685 response:
00:24:44.685 {
00:24:44.685 "code": -32602,
00:24:44.685 "message": "Invalid parameters"
00:24:44.685 }
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
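As an aside for reproducing the failure above by hand: rpc_cmd in this trace is a thin wrapper around SPDK's scripts/rpc.py, and the -32602 rejection is the expected outcome of host/auth.sh's negative test, since the target requires DH-CHAP but the attach supplies no key. A minimal sketch using the addresses and NQNs from this log; the key1/ckey1 names are an assumption tied to the nvmet_auth_set_key sha256 ffdhe2048 1 step shown earlier in this run:

    # rejected with -32602: the subsystem requires DH-CHAP and no key is given
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

    # accepted: the same attach with the key pair matching the target's current setting
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1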
00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.685 request: 00:24:44.685 { 00:24:44.685 "name": "nvme0", 00:24:44.685 "trtype": "rdma", 00:24:44.685 "traddr": "192.168.100.8", 00:24:44.685 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:44.685 "adrfam": "ipv4", 00:24:44.685 "trsvcid": "4420", 00:24:44.685 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:44.685 "dhchap_key": "key2", 00:24:44.685 "method": "bdev_nvme_attach_controller", 00:24:44.685 "req_id": 1 00:24:44.685 } 00:24:44.685 Got JSON-RPC error response 00:24:44.685 response: 00:24:44.685 { 00:24:44.685 "code": -32602, 00:24:44.685 "message": "Invalid parameters" 00:24:44.685 } 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.685 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.685 request: 00:24:44.685 { 00:24:44.685 "name": "nvme0", 00:24:44.685 "trtype": "rdma", 00:24:44.685 "traddr": "192.168.100.8", 00:24:44.685 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:44.685 "adrfam": "ipv4", 00:24:44.685 "trsvcid": "4420", 00:24:44.685 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:44.685 "dhchap_key": "key1", 00:24:44.685 "dhchap_ctrlr_key": "ckey2", 00:24:44.685 "method": "bdev_nvme_attach_controller", 00:24:44.685 "req_id": 1 00:24:44.685 } 00:24:44.685 Got JSON-RPC error response 00:24:44.685 response: 00:24:44.685 { 00:24:44.685 "code": -32602, 00:24:44.686 "message": "Invalid parameters" 00:24:44.686 } 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:44.686 rmmod nvme_rdma 00:24:44.686 rmmod nvme_fabrics 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 625067 ']' 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 625067 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 625067 ']' 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 625067 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:44.686 00:11:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 625067 00:24:44.686 00:11:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:44.686 00:11:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:44.686 00:11:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 625067' 00:24:44.686 killing process with pid 625067 00:24:44.686 00:11:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 625067 00:24:44.686 00:11:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 625067 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:45.253 00:11:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:24:45.253 00:11:14 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:24:46.681 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:24:46.681 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:24:46.681 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:24:46.681 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:24:46.681 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:24:46.681 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:24:46.681 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:24:46.681 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:24:46.681 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:24:46.681 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:24:46.681 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:24:46.681 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:24:46.681 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:24:46.681 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:24:46.681 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:24:46.681 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:24:47.618 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:24:47.618 00:11:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.7sX /tmp/spdk.key-null.gzv /tmp/spdk.key-sha256.EVp /tmp/spdk.key-sha384.Sui /tmp/spdk.key-sha512.ZhP /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log
00:24:47.618 00:11:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:24:48.991 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:24:48.991 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:24:48.991 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:24:48.991 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:24:48.991 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:24:48.991 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:24:48.991 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:24:48.991 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:24:48.991 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:24:48.991 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:24:48.991 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:24:48.991 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:24:48.991 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:24:48.991 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:24:48.991 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:24:48.991 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:24:48.991 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:24:49.250
00:24:49.250 real 0m50.855s
00:24:49.250 user 0m44.509s
00:24:49.250 sys 0m6.673s
00:24:49.250 00:11:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable
00:24:49.250 00:11:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.250 ************************************
00:24:49.250 END TEST nvmf_auth_host
00:24:49.250 ************************************
00:24:49.250 00:11:18 nvmf_rdma -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]]
00:24:49.250 00:11:18 nvmf_rdma -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:24:49.250 00:11:18 nvmf_rdma -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
00:24:49.250 00:11:18 nvmf_rdma -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
00:24:49.250 00:11:18 nvmf_rdma -- nvmf/nvmf.sh@121 -- # run_test
nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:24:49.250 00:11:18 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:49.250 00:11:18 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:49.250 00:11:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:49.250 ************************************ 00:24:49.250 START TEST nvmf_bdevperf 00:24:49.250 ************************************ 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:24:49.250 * Looking for test storage... 00:24:49.250 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:49.250 00:11:18 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:49.250 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:49.251 00:11:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:49.251 00:11:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.781 
00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:24:51.781 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:24:51.781 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.781 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:24:51.782 Found net devices under 0000:09:00.0: mlx_0_0 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:51.782 00:11:21 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:24:51.782 Found net devices under 0000:09:00.1: mlx_0_1 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:51.782 00:11:21 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:51.782 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:52.040 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:52.040 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:24:52.040 altname enp9s0f0np0 00:24:52.040 inet 192.168.100.8/24 scope global mlx_0_0 00:24:52.040 valid_lft forever preferred_lft forever 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:52.040 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:52.040 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:24:52.040 altname enp9s0f1np1 00:24:52.040 inet 192.168.100.9/24 scope global mlx_0_1 00:24:52.040 valid_lft forever preferred_lft forever 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:52.040 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:52.041 192.168.100.9' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:52.041 192.168.100.9' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:52.041 192.168.100.9' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=635200 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 635200 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 635200 ']' 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:52.041 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.041 [2024-05-15 00:11:21.242594] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:24:52.041 [2024-05-15 00:11:21.242680] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.041 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.041 [2024-05-15 00:11:21.311497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:52.299 [2024-05-15 00:11:21.423091] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.299 [2024-05-15 00:11:21.423141] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.299 [2024-05-15 00:11:21.423170] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.299 [2024-05-15 00:11:21.423182] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.299 [2024-05-15 00:11:21.423197] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:52.299 [2024-05-15 00:11:21.423326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.299 [2024-05-15 00:11:21.423389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.299 [2024-05-15 00:11:21.423392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.299 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:52.299 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:24:52.299 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:52.299 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:52.299 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.299 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.299 00:11:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:52.299 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.299 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.299 [2024-05-15 00:11:21.575475] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1556160/0x155a650) succeed. 00:24:52.299 [2024-05-15 00:11:21.585781] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1557700/0x159bce0) succeed. 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.557 Malloc0 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.557 [2024-05-15 00:11:21.752181] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:52.557 
[2024-05-15 00:11:21.752512] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:52.557 { 00:24:52.557 "params": { 00:24:52.557 "name": "Nvme$subsystem", 00:24:52.557 "trtype": "$TEST_TRANSPORT", 00:24:52.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.557 "adrfam": "ipv4", 00:24:52.557 "trsvcid": "$NVMF_PORT", 00:24:52.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.557 "hdgst": ${hdgst:-false}, 00:24:52.557 "ddgst": ${ddgst:-false} 00:24:52.557 }, 00:24:52.557 "method": "bdev_nvme_attach_controller" 00:24:52.557 } 00:24:52.557 EOF 00:24:52.557 )") 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:52.557 00:11:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:52.557 "params": { 00:24:52.557 "name": "Nvme1", 00:24:52.557 "trtype": "rdma", 00:24:52.557 "traddr": "192.168.100.8", 00:24:52.557 "adrfam": "ipv4", 00:24:52.557 "trsvcid": "4420", 00:24:52.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.557 "hdgst": false, 00:24:52.557 "ddgst": false 00:24:52.557 }, 00:24:52.557 "method": "bdev_nvme_attach_controller" 00:24:52.557 }' 00:24:52.557 [2024-05-15 00:11:21.796784] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:24:52.557 [2024-05-15 00:11:21.796864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635349 ] 00:24:52.557 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.557 [2024-05-15 00:11:21.867090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.815 [2024-05-15 00:11:21.980960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.072 Running I/O for 1 seconds... 
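Two things just happened above: rpc_cmd provisioned the target (transport, ramdisk, subsystem, namespace, listener), and gen_nvmf_target_json expanded its heredoc into the bdev_nvme_attach_controller config that bdevperf reads through --json /dev/fd/62. The same steps as a standalone sketch, using a temp file in place of the process-substitution fd; the addresses and NQNs are the values from this run, and the "subsystems" wrapper around the attach-controller fragment is assumed from the standard SPDK JSON-config shape rather than shown verbatim in the log:

    # Provision: RDMA transport, a 64 MiB ramdisk with 512 B blocks, one
    # subsystem with one namespace, listening on 192.168.100.8:4420.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Hand bdevperf the attach-controller config it printed above
    # (wrapper shape assumed, values copied from this run).
    cat > /tmp/nvme1.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {"name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
                 "adrfam": "ipv4", "trsvcid": "4420",
                 "subnqn": "nqn.2016-06.io.spdk:cnode1",
                 "hostnqn": "nqn.2016-06.io.spdk:host1",
                 "hdgst": false, "ddgst": false}}]}]}
    EOF
    bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1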
00:24:54.005 
00:24:54.005                                                                                                 Latency(us) 
00:24:54.005 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max 
00:24:54.005 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:24:54.005 	 Verification LBA range: start 0x0 length 0x4000 
00:24:54.005 	 Nvme1n1             :       1.01   14049.43      54.88      0.00      0.00    9046.59    2366.58   21165.70 
00:24:54.005 =================================================================================================================== 
00:24:54.005 Total                       :              14049.43      54.88      0.00      0.00    9046.59    2366.58   21165.70 
00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=635485 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:54.269 { 00:24:54.269 "params": { 00:24:54.269 "name": "Nvme$subsystem", 00:24:54.269 "trtype": "$TEST_TRANSPORT", 00:24:54.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.269 "adrfam": "ipv4", 00:24:54.269 "trsvcid": "$NVMF_PORT", 00:24:54.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.269 "hdgst": ${hdgst:-false}, 00:24:54.269 "ddgst": ${ddgst:-false} 00:24:54.269 }, 00:24:54.269 "method": "bdev_nvme_attach_controller" 00:24:54.269 } 00:24:54.269 EOF 00:24:54.269 )") 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:54.269 00:11:23 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:54.269 "params": { 00:24:54.269 "name": "Nvme1", 00:24:54.269 "trtype": "rdma", 00:24:54.269 "traddr": "192.168.100.8", 00:24:54.269 "adrfam": "ipv4", 00:24:54.269 "trsvcid": "4420", 00:24:54.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:54.269 "hdgst": false, 00:24:54.269 "ddgst": false 00:24:54.269 }, 00:24:54.269 "method": "bdev_nvme_attach_controller" 00:24:54.270 }' 00:24:54.270 [2024-05-15 00:11:23.510001] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:24:54.270 [2024-05-15 00:11:23.510083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635485 ] 00:24:54.270 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.270 [2024-05-15 00:11:23.581375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.531 [2024-05-15 00:11:23.690655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.789 Running I/O for 15 seconds... 
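A 15-second verify run is now in flight, launched with -f this time. The xtrace lines that follow (bdevperf.sh@33 and @35 below) kill the target out from under it and bring up a replacement three seconds later; bdevperf has to ride out the outage. The choreography, reduced to a hedged sketch (variable names illustrative; the sh@NN comments key each step to the markers in the log):

    bdevperf --json "$cfg" -q 128 -o 4096 -w verify -t 15 -f &   # sh@29-30
    bdevperfpid=$!
    sleep 3                                                      # sh@32: let I/O start
    kill -9 "$nvmfpid"                                           # sh@33: drop the target mid-run
    sleep 3                                                      # sh@35
    tgt_init                                                     # sh@36: fresh target, same subsystem
    wait "$bdevperfpid"                                          # sh@38: pass iff bdevperf reconnected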
00:24:57.315 00:11:26 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 635200 
00:24:57.315 00:11:26 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 
00:24:58.251 [2024-05-15 00:11:27.502502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:58.251 [2024-05-15 00:11:27.502554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:7290 p:0 m:0 dnr:0 
[... identical print_command/print_completion pairs repeat for every command still queued when the target died: the 8-block WRITEs from lba:32808 through lba:33784 (SGL DATA BLOCK) and the READs at lba:32768, 32776 and 32784 (SGL KEYED DATA BLOCK, key:0x187400), each completed as ABORTED - SQ DELETION (00/08) ...]
00:24:58.254 [2024-05-15 00:11:27.508289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:58.254 
[2024-05-15 00:11:27.508314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:58.254 [2024-05-15 00:11:27.508328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32792 len:8 PRP1 0x0 PRP2 0x0 00:24:58.254 [2024-05-15 00:11:27.508343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.254 [2024-05-15 00:11:27.508399] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:24:58.254 [2024-05-15 00:11:27.513124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:58.254 [2024-05-15 00:11:27.531603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:58.254 [2024-05-15 00:11:27.534811] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:58.254 [2024-05-15 00:11:27.534845] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:58.254 [2024-05-15 00:11:27.534868] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:59.625 [2024-05-15 00:11:28.538865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:59.625 [2024-05-15 00:11:28.538897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.625 [2024-05-15 00:11:28.539144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.625 [2024-05-15 00:11:28.539182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.625 [2024-05-15 00:11:28.539199] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:59.625 [2024-05-15 00:11:28.542832] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
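The storm above is the host side of the kill: every command still queued on the dead connection is completed locally with ABORTED - SQ DELETION, nothing reached the wire, and the first reset attempt is then rejected because nothing is listening yet. To size such a storm from a saved log, a throwaway awk pass like this works (hypothetical helper, not part of the test suite; assumes the raw console output was captured to bdevperf.log):

    awk '/nvme_io_qpair_print_command/ {
        for (i = 1; i <= NF; i++)
            if ($i ~ /^lba:[0-9]+$/) {
                lba = substr($i, 5) + 0        # strip the "lba:" prefix
                if (min == "" || lba < min) min = lba
                if (lba > max) max = lba
                n++
            }
    } END { printf "%d aborted commands, lba %d..%d\n", n, min, max }' bdevperf.log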
00:24:59.625 [2024-05-15 00:11:28.549487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.625 [2024-05-15 00:11:28.552444] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:59.625 [2024-05-15 00:11:28.552474] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:59.625 [2024-05-15 00:11:28.552488] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:25:00.188 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 635200 Killed "${NVMF_APP[@]}" "$@" 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=636268 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 636268 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 636268 ']' 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:00.188 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.188 [2024-05-15 00:11:29.522590] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:25:00.188 [2024-05-15 00:11:29.522686] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.446 [2024-05-15 00:11:29.556655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:00.446 [2024-05-15 00:11:29.556699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:00.446 [2024-05-15 00:11:29.556979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.446 [2024-05-15 00:11:29.557001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.446 [2024-05-15 00:11:29.557015] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:00.446 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.446 [2024-05-15 00:11:29.560226] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.446 [2024-05-15 00:11:29.567195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.446 [2024-05-15 00:11:29.570053] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:00.446 [2024-05-15 00:11:29.570088] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:00.446 [2024-05-15 00:11:29.570102] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:25:00.446 [2024-05-15 00:11:29.600762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:00.446 [2024-05-15 00:11:29.713031] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.447 [2024-05-15 00:11:29.713093] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.447 [2024-05-15 00:11:29.713123] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.447 [2024-05-15 00:11:29.713135] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.447 [2024-05-15 00:11:29.713145] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.447 [2024-05-15 00:11:29.713201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.447 [2024-05-15 00:11:29.713267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.447 [2024-05-15 00:11:29.713270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.704 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:00.704 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:25:00.704 00:11:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.705 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.705 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.705 00:11:29 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.705 00:11:29 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:00.705 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.705 00:11:29 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.705 [2024-05-15 00:11:29.890789] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fad160/0x1fb1650) succeed. 
00:25:00.705 [2024-05-15 00:11:29.901221] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fae700/0x1ff2ce0) succeed. 00:25:00.705 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.705 00:11:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:00.705 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.705 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.705 Malloc0 00:25:00.705 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.705 00:11:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:00.705 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.705 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.968 [2024-05-15 00:11:30.069309] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:00.968 [2024-05-15 00:11:30.069634] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.968 00:11:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 635485 00:25:01.228 [2024-05-15 00:11:30.574084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:01.228 [2024-05-15 00:11:30.574121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.228 [2024-05-15 00:11:30.574345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.228 [2024-05-15 00:11:30.574367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.228 [2024-05-15 00:11:30.574382] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:01.485 [2024-05-15 00:11:30.577803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
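For reference, the rpc_cmd trace above is the standard one-subsystem RDMA target bring-up. Issued directly against scripts/rpc.py with the exact values from this run, it reads as the sketch below (rpc_cmd is a thin wrapper around this client):

  # RDMA transport: 1024 shared buffers, 8 KiB in-capsule data (-u)
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Subsystem that allows any host (-a) with a fixed serial number (-s)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # RDMA listener on the first Mellanox port
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420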
00:25:01.485 [2024-05-15 00:11:30.583127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.485 [2024-05-15 00:11:30.626184] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:09.631 00:25:09.631 Latency(us) 00:25:09.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.631 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.631 Verification LBA range: start 0x0 length 0x4000 00:25:09.631 Nvme1n1 : 15.01 10218.09 39.91 7706.15 0.00 7115.28 688.73 1037701.88 00:25:09.631 =================================================================================================================== 00:25:09.631 Total : 10218.09 39.91 7706.15 0.00 7115.28 688.73 1037701.88 00:25:09.888 00:11:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:09.888 00:11:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.889 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:10.147 rmmod nvme_rdma 00:25:10.147 rmmod nvme_fabrics 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 636268 ']' 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 636268 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 636268 ']' 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 636268 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 636268 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 636268' 00:25:10.147 killing process with pid 636268 00:25:10.147 
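The teardown that runs here mirrors the bring-up: delete the subsystem over RPC, unload the kernel initiator modules, then kill the target. A condensed sketch of the nvmftestfini/killprocess path traced just below; the retry loop and the final wait are reconstructed from the helper names and may differ in detail from common.sh:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  set +e
  for i in {1..20}; do
      # Unloading can fail transiently while queue pairs drain.
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1
  done
  set -e
  kill "$nvmfpid"             # nvmfpid=636268 in this run
  wait "$nvmfpid" 2>/dev/null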
00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 636268 00:25:10.147 [2024-05-15 00:11:39.313806] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:10.147 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 636268 00:25:10.147 [2024-05-15 00:11:39.388019] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:25:10.406 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:10.406 00:11:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:10.406 00:25:10.406 real 0m21.197s 00:25:10.406 user 1m2.780s 00:25:10.406 sys 0m2.979s 00:25:10.406 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:10.406 00:11:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:10.406 ************************************ 00:25:10.406 END TEST nvmf_bdevperf 00:25:10.406 ************************************ 00:25:10.406 00:11:39 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:25:10.406 00:11:39 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:10.406 00:11:39 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:10.406 00:11:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:10.406 ************************************ 00:25:10.406 START TEST nvmf_target_disconnect 00:25:10.406 ************************************ 00:25:10.406 00:11:39 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:25:10.664 * Looking for test storage... 
00:25:10.664 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.664 00:11:39 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.665 00:11:39 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:25:13.196 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:25:13.196 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:25:13.196 Found net devices under 0000:09:00.0: mlx_0_0 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:25:13.196 Found net devices under 0000:09:00.1: mlx_0_1 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:13.196 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:13.196 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:25:13.196 altname enp9s0f0np0 00:25:13.196 inet 192.168.100.8/24 scope global mlx_0_0 00:25:13.196 valid_lft forever preferred_lft forever 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:13.196 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:13.197 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:13.197 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:25:13.197 altname enp9s0f1np1 00:25:13.197 inet 192.168.100.9/24 scope global mlx_0_1 00:25:13.197 valid_lft forever preferred_lft forever 00:25:13.197 
00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:13.197 
00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:13.197 192.168.100.9' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:13.197 192.168.100.9' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:13.197 192.168.100.9' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:13.197 ************************************ 00:25:13.197 START TEST nvmf_target_disconnect_tc1 00:25:13.197 ************************************ 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:25:13.197 00:11:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:13.197 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.197 [2024-05-15 00:11:42.335002] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:13.197 [2024-05-15 00:11:42.335061] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:13.197 [2024-05-15 00:11:42.335078] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:25:14.130 [2024-05-15 00:11:43.338963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:14.130 
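The two harvested addresses are then split into primary and secondary target IPs with nothing more than head/tail, as the trace a few entries above shows:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9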
[2024-05-15 00:11:43.339021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:25:14.130 [2024-05-15 00:11:43.339040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:25:14.130 [2024-05-15 00:11:43.339103] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:14.130 [2024-05-15 00:11:43.339123] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:14.130 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:25:14.130 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:14.130 Initializing NVMe Controllers 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:25:14.130 00:25:14.130 real 0m1.116s 00:25:14.130 user 0m0.889s 00:25:14.130 sys 0m0.217s 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:14.130 ************************************ 00:25:14.130 END TEST nvmf_target_disconnect_tc1 00:25:14.130 ************************************ 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:14.130 ************************************ 00:25:14.130 START TEST nvmf_target_disconnect_tc2 00:25:14.130 ************************************ 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:14.130 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.131 00:11:43 
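tc1 deliberately probes an address nothing is listening on, so spdk_nvme_probe() has to fail; the test disables errexit, runs the reconnect example, and then asserts that the failure was actually observed. The skeleton of that flow, with the transport ID string copied from the run above:

  set +e
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
  rc=$?
  set -e
  # The connect attempt is rejected (RDMA_CM_EVENT_REJECTED above), so a
  # zero exit status here would itself be a test failure.
  [ "$rc" -ne 0 ] || exit 1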
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=639580 00:25:14.131 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:14.131 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 639580 00:25:14.131 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 639580 ']' 00:25:14.131 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.131 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:14.131 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.131 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:14.131 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.131 [2024-05-15 00:11:43.452895] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:25:14.131 [2024-05-15 00:11:43.453007] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.389 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.389 [2024-05-15 00:11:43.525441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.389 [2024-05-15 00:11:43.638270] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.389 [2024-05-15 00:11:43.638325] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.389 [2024-05-15 00:11:43.638346] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.389 [2024-05-15 00:11:43.638363] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.389 [2024-05-15 00:11:43.638383] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:14.389 [2024-05-15 00:11:43.638484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:14.389 [2024-05-15 00:11:43.638547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:14.389 [2024-05-15 00:11:43.638621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:14.389 [2024-05-15 00:11:43.638613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.647 Malloc0 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.647 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.647 [2024-05-15 00:11:43.846806] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ff6ac0/0x20027c0) succeed. 00:25:14.647 [2024-05-15 00:11:43.857952] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ff8100/0x2082800) succeed. 
00:25:14.905 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.905 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.905 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.905 00:11:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.905 [2024-05-15 00:11:44.023356] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:14.905 [2024-05-15 00:11:44.023701] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=639612 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:25:14.905 00:11:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:14.905 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.803 00:11:46 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 639580 00:25:16.803 00:11:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Read completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 Write completed with error (sct=0, sc=8) 00:25:18.279 starting I/O failed 00:25:18.279 [2024-05-15 00:11:47.207879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:18.843 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 639580 Killed "${NVMF_APP[@]}" "$@" 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
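The burst of "completed with error (sct=0, sc=8)" entries above is the intended signature of tc2: while the reconnect example runs its workload, the target underneath it is killed outright, so every outstanding command completes with generic status 0x08, the same ABORTED - SQ DELETION code printed earlier in this log. In outline, with the pids recorded above (reconnectpid=639612 against target pid 639580):

  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 639580   # hard-kill the running nvmf_tgt under live I/O
  sleep 2          # then restart it (disconnect_init) and let the example reconnect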
-- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=640138 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 640138 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 640138 ']' 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:18.843 00:11:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.844 [2024-05-15 00:11:48.090756] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 
00:25:18.844 [2024-05-15 00:11:48.090838] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.844 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.844 [2024-05-15 00:11:48.163287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:19.102 32 outstanding reads/writes completed with error (sct=0, sc=8), each starting I/O failed 00:25:19.103 [2024-05-15 00:11:48.212985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.103 [2024-05-15 00:11:48.275001] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:19.103 [2024-05-15 00:11:48.275056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.103 [2024-05-15 00:11:48.275078] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.103 [2024-05-15 00:11:48.275095] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.103 [2024-05-15 00:11:48.275110] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.103 [2024-05-15 00:11:48.275205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:19.103 [2024-05-15 00:11:48.275332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:19.103 [2024-05-15 00:11:48.275398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:25:19.103 [2024-05-15 00:11:48.275407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:20.035 Malloc0 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:20.035 [2024-05-15 00:11:49.141130] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15afac0/0x15bb7c0) succeed. 00:25:20.035 [2024-05-15 00:11:49.152402] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15b1100/0x163b800) succeed. 
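The rpc_cmd traces above create the backing Malloc bdev and the RDMA transport that everything later in tc2 hangs off. A minimal sketch of the same two steps issued directly with SPDK's rpc.py, assuming a target listening on the default /var/tmp/spdk.sock:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024   # RDMA transport, 1024 shared buffers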
00:25:20.035 32 outstanding reads/writes completed with error (sct=0, sc=8), each starting I/O failed 00:25:20.035 [2024-05-15 00:11:49.217944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:20.035 00:11:49
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:20.035 [2024-05-15 00:11:49.326222] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:20.035 [2024-05-15 00:11:49.326573] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.035 00:11:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 639612 00:25:20.971 32 outstanding reads/writes completed with error (sct=0, sc=8), each starting I/O failed 00:25:20.971 [2024-05-15 00:11:50.222876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:20.971 [2024-05-15 00:11:50.222909] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:20.971 A controller has encountered a failure and is being reset. 00:25:20.971 [2024-05-15 00:11:50.222987] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:20.971 [2024-05-15 00:11:50.239912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.971 Controller properly reset. 00:25:25.198 Initializing NVMe Controllers 00:25:25.198 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.198 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.198 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:25.198 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:25.198 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:25.198 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:25.198 Initialization complete. Launching workers.
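With cnode1 re-exported on 192.168.100.8:4420 (the rpc_cmd traces above re-created the subsystem, namespace, and listeners after the kill), an initiator could attach with stock nvme-cli. A hedged sketch, assuming the nvme-rdma kernel module is loaded:

    nvme discover -t rdma -a 192.168.100.8 -s 4420                              # query the discovery listener added above
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1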
00:25:25.198 Starting thread on core 1 00:25:25.198 Starting thread on core 2 00:25:25.198 Starting thread on core 3 00:25:25.198 Starting thread on core 0 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:25:25.198 00:25:25.198 real 0m10.895s 00:25:25.198 user 0m36.824s 00:25:25.198 sys 0m1.788s 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.198 ************************************ 00:25:25.198 END TEST nvmf_target_disconnect_tc2 00:25:25.198 ************************************ 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:25.198 ************************************ 00:25:25.198 START TEST nvmf_target_disconnect_tc3 00:25:25.198 ************************************ 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc3 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # reconnectpid=640838 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:25:25.198 00:11:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@67 -- # sleep 2 00:25:25.198 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.099 00:11:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@68 -- # kill -9 640138 00:25:27.099 00:11:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@70 -- # sleep 2 00:25:28.470 32 outstanding reads/writes completed with error (sct=0, sc=8), each starting I/O failed 00:25:28.470 [2024-05-15 00:11:57.515305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:28.470 [2024-05-15 00:11:57.517430] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:28.470 [2024-05-15 00:11:57.517456] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:28.470 [2024-05-15 00:11:57.517469] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:29.037 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 640138 Killed "${NVMF_APP[@]}" "$@" 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=641364 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- #
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 641364 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@827 -- # '[' -z 641364 ']' 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:29.037 00:11:58 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:29.296 [2024-05-15 00:11:58.397297] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:25:29.296 [2024-05-15 00:11:58.397372] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.296 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.296 [2024-05-15 00:11:58.469557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.296 [2024-05-15 00:11:58.521733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:29.296 qpair failed and we were unable to recover it. 00:25:29.296 [2024-05-15 00:11:58.523746] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:29.296 [2024-05-15 00:11:58.523773] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:29.296 [2024-05-15 00:11:58.523787] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:29.296 [2024-05-15 00:11:58.576667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.296 [2024-05-15 00:11:58.576722] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.296 [2024-05-15 00:11:58.576741] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.296 [2024-05-15 00:11:58.576760] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.296 [2024-05-15 00:11:58.576775] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
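The app_setup_trace notices above spell out how the target's tracepoints can be inspected. A sketch using only what the notices state (instance id 0, shm file /dev/shm/nvmf_trace.0; the /tmp destination is illustrative):

    build/bin/spdk_trace -s nvmf -i 0      # snapshot of events while the target is running
    cp /dev/shm/nvmf_trace.0 /tmp/         # or keep the shm file for offline analysis/debug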
00:25:29.296 [2024-05-15 00:11:58.576873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:29.296 [2024-05-15 00:11:58.576944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:29.296 [2024-05-15 00:11:58.577020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:25:29.296 [2024-05-15 00:11:58.577029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # return 0 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.226 Malloc0 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.226 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.226 [2024-05-15 00:11:59.432410] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2246ac0/0x22527c0) succeed. 00:25:30.226 [2024-05-15 00:11:59.443787] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2248100/0x22d2800) succeed. 00:25:30.226 [2024-05-15 00:11:59.527819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:30.226 qpair failed and we were unable to recover it. 
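The connect retries that follow come from the reconnect example started at host/target_disconnect.sh@63; its -r transport ID string is what arms the later failover. The same invocation with the fields annotated (all values copied from the trace above):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
    # -q 32  : queue depth, which is why each CQ transport error in this log fails 32 outstanding I/Os
    # -o 4096: 4 KiB I/Os; -w randrw -M 50: 50/50 random read/write mix; -t 10: seconds per run
    # -c 0xF : core mask for the four worker threads
    # alt_traddr: the failover address tried once 192.168.100.8 is declared dead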
00:25:30.226 [2024-05-15 00:11:59.529678] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:30.226 [2024-05-15 00:11:59.529705] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:30.226 [2024-05-15 00:11:59.529719] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:30.483 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.483 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:30.483 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.483 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.483 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.483 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.484 [2024-05-15 00:11:59.610358] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:30.484 [2024-05-15 00:11:59.610676] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.484 00:11:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@73 -- # wait 640838 00:25:31.415 [2024-05-15 00:12:00.533731] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:31.415 qpair failed and we were unable to recover it. 00:25:31.415 [2024-05-15 00:12:00.535474] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:31.415 [2024-05-15 00:12:00.535502] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:31.415 [2024-05-15 00:12:00.535515] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:32.347 [2024-05-15 00:12:01.539245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:32.347 qpair failed and we were unable to recover it. 00:25:32.347 [2024-05-15 00:12:01.540894] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:32.347 [2024-05-15 00:12:01.540927] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:32.347 [2024-05-15 00:12:01.540949] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:33.279 [2024-05-15 00:12:02.544756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:33.279 qpair failed and we were unable to recover it. 00:25:33.279 [2024-05-15 00:12:02.546301] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:33.279 [2024-05-15 00:12:02.546328] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:33.279 [2024-05-15 00:12:02.546340] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:34.207 [2024-05-15 00:12:03.550218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:34.207 qpair failed and we were unable to recover it. 00:25:34.207 [2024-05-15 00:12:03.551812] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:34.207 [2024-05-15 00:12:03.551839] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:34.207 [2024-05-15 00:12:03.551852] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:35.596 [2024-05-15 00:12:04.555694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:35.596 qpair failed and we were unable to recover it. 
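The three-line pattern above (RDMA_CM_EVENT_REJECTED, RDMA connect error -74, failed to connect rqpair) repeats roughly once per second while the replacement target on 192.168.100.9 boots. A one-liner for counting those retries in a saved copy of this output (the log file name is illustrative):

    grep -c 'Failed to connect rqpair' nvmf_target_disconnect.log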
00:25:35.596 [2024-05-15 00:12:04.557269] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:35.596 [2024-05-15 00:12:04.557302] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:35.596 [2024-05-15 00:12:04.557316] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:36.527 [2024-05-15 00:12:05.561035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:36.527 qpair failed and we were unable to recover it. 00:25:37.461 32 outstanding reads/writes completed with error (sct=0, sc=8), each starting I/O failed 00:25:37.462 [2024-05-15 00:12:06.565804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:38.393 32 outstanding reads/writes completed with error (sct=0, sc=8), each starting I/O failed 00:25:38.393 [2024-05-15 00:12:07.570735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 [2024-05-15 00:12:07.570771] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:38.393 A controller has encountered a failure and is being reset. 00:25:38.393 Resorting to new failover address 192.168.100.9 00:25:38.393 [2024-05-15 00:12:07.570822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
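Here the host finally gives up on 192.168.100.8 and, per the notice above, resorts to the failover address it was given via alt_traddr. A hedged way to confirm the failover listener is up before expecting the reset to succeed, assuming nvme-cli on the initiator:

    until nvme discover -t rdma -a 192.168.100.9 -s 4420 >/dev/null 2>&1; do sleep 1; done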
00:25:38.393 [2024-05-15 00:12:07.570856] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:38.393 [2024-05-15 00:12:07.572259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:38.393 Controller properly reset. 00:25:39.324 32 outstanding reads/writes completed with error (sct=0, sc=8), each starting I/O failed 00:25:39.324 [2024-05-15 00:12:08.612269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:39.583 Initializing NVMe Controllers 00:25:39.583 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:39.583 Attached to NVMe over Fabrics
controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:39.583 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:39.583 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:39.583 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:39.583 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:39.583 Initialization complete. Launching workers. 00:25:39.583 Starting thread on core 1 00:25:39.583 Starting thread on core 2 00:25:39.583 Starting thread on core 3 00:25:39.583 Starting thread on core 0 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@74 -- # sync 00:25:39.583 00:25:39.583 real 0m14.345s 00:25:39.583 user 0m48.788s 00:25:39.583 sys 0m4.306s 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:39.583 ************************************ 00:25:39.583 END TEST nvmf_target_disconnect_tc3 00:25:39.583 ************************************ 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:39.583 rmmod nvme_rdma 00:25:39.583 rmmod nvme_fabrics 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 641364 ']' 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 641364 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 641364 ']' 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 641364 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 641364 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 
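nvmftestfini's teardown above unloads the initiator-side kernel modules (retried in a 1..20 loop while controllers drain) before killing the target. A minimal sketch of that ordering, assuming nothing else is using the rdma stack ($nvmfpid stands in for the traced killprocess of pid 641364):

    modprobe -v -r nvme-rdma       # -v prints the underlying 'rmmod nvme_rdma', as in the trace
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"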
00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 641364' 00:25:39.583 killing process with pid 641364 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 641364 00:25:39.583 [2024-05-15 00:12:08.783383] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:39.583 00:12:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 641364 00:25:39.583 [2024-05-15 00:12:08.868125] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:25:39.869 00:12:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:39.869 00:12:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:39.869 00:25:39.869 real 0m29.397s 00:25:39.869 user 2m8.975s 00:25:39.869 sys 0m8.413s 00:25:39.869 00:12:09 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:39.869 00:12:09 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:39.869 ************************************ 00:25:39.869 END TEST nvmf_target_disconnect 00:25:39.869 ************************************ 00:25:39.869 00:12:09 nvmf_rdma -- nvmf/nvmf.sh@125 -- # timing_exit host 00:25:39.869 00:12:09 nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:39.869 00:12:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:39.869 00:12:09 nvmf_rdma -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:39.869 00:25:39.869 real 20m2.437s 00:25:39.869 user 58m35.720s 00:25:39.869 sys 2m55.881s 00:25:39.869 00:12:09 nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:39.869 00:12:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:39.869 ************************************ 00:25:39.869 END TEST nvmf_rdma 00:25:39.869 ************************************ 00:25:40.128 00:12:09 -- spdk/autotest.sh@281 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:25:40.128 00:12:09 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:40.128 00:12:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:40.128 00:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:40.128 ************************************ 00:25:40.128 START TEST spdkcli_nvmf_rdma 00:25:40.128 ************************************ 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:25:40.128 * Looking for test storage... 
00:25:40.128 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=643295 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 643295 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@827 -- # '[' -z 643295 ']' 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:40.128 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:40.128 [2024-05-15 00:12:09.346034] Starting SPDK v24.05-pre git sha1 2260a96a9 / DPDK 23.11.0 initialization... 00:25:40.128 [2024-05-15 00:12:09.346122] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643295 ] 00:25:40.128 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.128 [2024-05-15 00:12:09.422998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:40.386 [2024-05-15 00:12:09.532409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.386 [2024-05-15 00:12:09.532413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # return 0 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:25:40.386 00:12:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:25:42.917 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:25:42.917 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 
00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:25:42.917 Found net devices under 0000:09:00.0: mlx_0_0 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:25:42.917 Found net devices under 0000:09:00.1: mlx_0_1 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local 
net_dev rxe_net_dev rxe_net_devs 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:42.917 250: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:42.917 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:25:42.917 altname enp9s0f0np0 00:25:42.917 inet 192.168.100.8/24 scope global mlx_0_0 00:25:42.917 valid_lft forever preferred_lft forever 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:42.917 251: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:42.917 link/ether b8:59:9f:af:fe:11 
brd ff:ff:ff:ff:ff:ff 00:25:42.917 altname enp9s0f1np1 00:25:42.917 inet 192.168.100.9/24 scope global mlx_0_1 00:25:42.917 valid_lft forever preferred_lft forever 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:42.917 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:42.918 192.168.100.9' 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- 
nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:42.918 192.168.100.9' 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:42.918 192.168.100.9' 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:42.918 00:12:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:42.918 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:42.918 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:42.918 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:42.918 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:42.918 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:42.918 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:42.918 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:42.918 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:42.918 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' 
'\''192.168.100.8:4260'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:42.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:42.918 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:42.918 ' 00:25:45.448 [2024-05-15 00:12:14.479431] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfe9310/0x1119440) succeed. 00:25:45.448 [2024-05-15 00:12:14.493585] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfea9f0/0xff92c0) succeed. 00:25:46.820 [2024-05-15 00:12:15.804765] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:46.820 [2024-05-15 00:12:15.805216] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:25:49.346 [2024-05-15 00:12:18.080407] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:25:50.718 [2024-05-15 00:12:20.051141] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:25:52.616 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:52.616 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:52.616 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:52.616 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:52.616 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:52.616 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:52.616 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:52.616 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create 
rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:52.616 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:52.616 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:52.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:52.616 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:52.616 00:12:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:52.616 00:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.616 00:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:52.616 00:12:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:52.616 00:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:52.616 00:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:52.616 00:12:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:25:52.616 00:12:21 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:52.874 00:12:22 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:52.874 00:12:22 
spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:52.874 00:12:22 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:52.874 00:12:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.874 00:12:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:52.874 00:12:22 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:52.874 00:12:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:52.874 00:12:22 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:52.874 00:12:22 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:52.874 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:52.874 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:52.874 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:52.874 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:25:52.874 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:25:52.874 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:52.874 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:52.874 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:52.874 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:52.874 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:52.874 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:52.874 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:52.874 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:52.874 ' 00:25:58.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:58.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:58.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:58.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:58.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:25:58.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:25:58.151 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:58.151 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:58.151 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:58.151 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:58.151 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:58.151 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 
00:25:58.152 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:58.152 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:58.152 00:12:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:58.152 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.152 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:58.152 00:12:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 643295 00:25:58.152 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@946 -- # '[' -z 643295 ']' 00:25:58.152 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # kill -0 643295 00:25:58.152 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@951 -- # uname 00:25:58.152 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:58.152 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 643295 00:25:58.410 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:58.410 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:58.410 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # echo 'killing process with pid 643295' 00:25:58.410 killing process with pid 643295 00:25:58.410 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@965 -- # kill 643295 00:25:58.410 [2024-05-15 00:12:27.501553] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:58.410 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@970 -- # wait 643295 00:25:58.410 [2024-05-15 00:12:27.562504] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:58.669 rmmod nvme_rdma 00:25:58.669 rmmod nvme_fabrics 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:58.669 00:25:58.669 real 0m18.640s 00:25:58.669 user 0m39.793s 00:25:58.669 sys 0m2.378s 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:58.669 00:12:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:58.669 ************************************ 00:25:58.669 END TEST spdkcli_nvmf_rdma 00:25:58.669 ************************************ 
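The create/verify/clear sequence above reduces to driving spdkcli against a running nvmf_tgt. The sketch below is condensed to one bdev and one subsystem; the binary path, the spdkcli_job.py triple format ('command' 'expected-output' check-flag), and the 192.168.100.8:4260 RDMA listener are all taken from the log above, while $SPDK (a checkout root) is a stand-in of this note, not something the harness defines:

  # Start the target on two cores, exactly as run_nvmf_tgt did above.
  $SPDK/build/bin/nvmf_tgt -m 0x3 -p 0 &

  # spdkcli_job.py takes newline-separated "'command' 'expected output' check" triples,
  # the same format visible in the configuration phase of this log.
  $SPDK/test/spdkcli/spdkcli_job.py "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
  'nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192' '' True
  '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True
  '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1' 'Malloc1' True
  '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4' '192.168.100.8:4260' True"

  # Inspect the tree the way check_match does, then tear down; delete triples omit
  # the check flag, as they do in the clear_nvmf_config phase above.
  $SPDK/scripts/spdkcli.py ll /nvmf
  $SPDK/test/spdkcli/spdkcli_job.py "'/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1' 'nqn.2014-08.org.spdk:cnode1'
  '/bdevs/malloc delete Malloc1' 'Malloc1'"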
00:25:58.669 00:12:27 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:58.669 00:12:27 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:25:58.669 00:12:27 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:58.669 00:12:27 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:58.669 00:12:27 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:58.669 00:12:27 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:25:58.669 00:12:27 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:25:58.669 00:12:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:58.669 00:12:27 -- common/autotest_common.sh@10 -- # set +x 00:25:58.669 00:12:27 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:25:58.669 00:12:27 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:25:58.669 00:12:27 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:25:58.669 00:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:00.576 INFO: APP EXITING 00:26:00.576 INFO: killing all VMs 00:26:00.576 INFO: killing vhost app 00:26:00.576 INFO: EXIT DONE 00:26:01.511 Waiting for block devices as requested 00:26:01.770 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:01.770 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:01.770 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:01.770 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:02.029 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:02.029 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:02.029 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:02.029 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:02.287 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:02.287 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:02.287 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:02.287 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:02.545 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:02.545 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:02.545 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:02.545 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:02.545 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:04.446 Cleaning 00:26:04.446 Removing: /var/run/dpdk/spdk0/config 00:26:04.446 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:04.446 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:04.446 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:04.446 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:04.446 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:26:04.446 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:26:04.446 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:26:04.446 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:26:04.446 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:04.446 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:04.446 Removing: /var/run/dpdk/spdk1/config 
00:26:04.446 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:04.446 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:04.446 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:04.446 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:04.446 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:26:04.446 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:26:04.446 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:26:04.446 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:26:04.446 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:04.446 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:04.446 Removing: /var/run/dpdk/spdk1/mp_socket 00:26:04.446 Removing: /var/run/dpdk/spdk2/config 00:26:04.446 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:04.446 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:04.446 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:04.446 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:04.446 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:26:04.446 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:26:04.446 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:26:04.446 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:26:04.446 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:04.446 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:04.446 Removing: /var/run/dpdk/spdk3/config 00:26:04.446 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:04.446 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:04.446 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:04.446 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:04.446 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:26:04.446 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:26:04.446 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:26:04.446 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:26:04.446 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:04.446 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:04.446 Removing: /var/run/dpdk/spdk4/config 00:26:04.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:04.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:04.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:04.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:04.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:26:04.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:26:04.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:26:04.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:26:04.446 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:04.446 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:04.446 Removing: /dev/shm/bdevperf_trace.pid483077 00:26:04.446 Removing: /dev/shm/bdevperf_trace.pid586199 00:26:04.446 Removing: /dev/shm/bdev_svc_trace.1 00:26:04.446 Removing: /dev/shm/nvmf_trace.0 00:26:04.446 Removing: /dev/shm/spdk_tgt_trace.pid399961 00:26:04.446 Removing: /var/run/dpdk/spdk0 00:26:04.446 Removing: /var/run/dpdk/spdk1 00:26:04.446 Removing: /var/run/dpdk/spdk2 00:26:04.446 Removing: /var/run/dpdk/spdk3 00:26:04.446 Removing: /var/run/dpdk/spdk4 00:26:04.446 Removing: /var/run/dpdk/spdk_pid398317 00:26:04.446 Removing: /var/run/dpdk/spdk_pid399050 00:26:04.446 Removing: /var/run/dpdk/spdk_pid399961 00:26:04.446 Removing: /var/run/dpdk/spdk_pid400304 00:26:04.446 Removing: 
/var/run/dpdk/spdk_pid400999 00:26:04.446 Removing: /var/run/dpdk/spdk_pid401262 00:26:04.446 Removing: /var/run/dpdk/spdk_pid401974 00:26:04.446 Removing: /var/run/dpdk/spdk_pid401995 00:26:04.446 Removing: /var/run/dpdk/spdk_pid402237 00:26:04.446 Removing: /var/run/dpdk/spdk_pid405447 00:26:04.446 Removing: /var/run/dpdk/spdk_pid406367 00:26:04.446 Removing: /var/run/dpdk/spdk_pid406679 00:26:04.446 Removing: /var/run/dpdk/spdk_pid406865 00:26:04.446 Removing: /var/run/dpdk/spdk_pid407200 00:26:04.446 Removing: /var/run/dpdk/spdk_pid407388 00:26:04.446 Removing: /var/run/dpdk/spdk_pid407543 00:26:04.446 Removing: /var/run/dpdk/spdk_pid407704 00:26:04.447 Removing: /var/run/dpdk/spdk_pid407973 00:26:04.447 Removing: /var/run/dpdk/spdk_pid408329 00:26:04.447 Removing: /var/run/dpdk/spdk_pid410823 00:26:04.447 Removing: /var/run/dpdk/spdk_pid411585 00:26:04.447 Removing: /var/run/dpdk/spdk_pid411756 00:26:04.447 Removing: /var/run/dpdk/spdk_pid411766 00:26:04.447 Removing: /var/run/dpdk/spdk_pid412192 00:26:04.447 Removing: /var/run/dpdk/spdk_pid412328 00:26:04.447 Removing: /var/run/dpdk/spdk_pid412668 00:26:04.447 Removing: /var/run/dpdk/spdk_pid412775 00:26:04.447 Removing: /var/run/dpdk/spdk_pid413067 00:26:04.447 Removing: /var/run/dpdk/spdk_pid413074 00:26:04.447 Removing: /var/run/dpdk/spdk_pid413360 00:26:04.447 Removing: /var/run/dpdk/spdk_pid413382 00:26:04.447 Removing: /var/run/dpdk/spdk_pid413867 00:26:04.447 Removing: /var/run/dpdk/spdk_pid414026 00:26:04.447 Removing: /var/run/dpdk/spdk_pid414219 00:26:04.447 Removing: /var/run/dpdk/spdk_pid414387 00:26:04.447 Removing: /var/run/dpdk/spdk_pid414534 00:26:04.447 Removing: /var/run/dpdk/spdk_pid414627 00:26:04.447 Removing: /var/run/dpdk/spdk_pid414879 00:26:04.447 Removing: /var/run/dpdk/spdk_pid415040 00:26:04.447 Removing: /var/run/dpdk/spdk_pid415311 00:26:04.447 Removing: /var/run/dpdk/spdk_pid415473 00:26:04.447 Removing: /var/run/dpdk/spdk_pid415631 00:26:04.447 Removing: /var/run/dpdk/spdk_pid415906 00:26:04.447 Removing: /var/run/dpdk/spdk_pid416064 00:26:04.447 Removing: /var/run/dpdk/spdk_pid416226 00:26:04.447 Removing: /var/run/dpdk/spdk_pid416498 00:26:04.447 Removing: /var/run/dpdk/spdk_pid416656 00:26:04.447 Removing: /var/run/dpdk/spdk_pid416877 00:26:04.447 Removing: /var/run/dpdk/spdk_pid417095 00:26:04.447 Removing: /var/run/dpdk/spdk_pid417246 00:26:04.447 Removing: /var/run/dpdk/spdk_pid417525 00:26:04.447 Removing: /var/run/dpdk/spdk_pid417681 00:26:04.447 Removing: /var/run/dpdk/spdk_pid417840 00:26:04.447 Removing: /var/run/dpdk/spdk_pid418114 00:26:04.447 Removing: /var/run/dpdk/spdk_pid418281 00:26:04.447 Removing: /var/run/dpdk/spdk_pid418511 00:26:04.447 Removing: /var/run/dpdk/spdk_pid418714 00:26:04.447 Removing: /var/run/dpdk/spdk_pid418899 00:26:04.447 Removing: /var/run/dpdk/spdk_pid419125 00:26:04.447 Removing: /var/run/dpdk/spdk_pid421737 00:26:04.447 Removing: /var/run/dpdk/spdk_pid456503 00:26:04.447 Removing: /var/run/dpdk/spdk_pid459204 00:26:04.447 Removing: /var/run/dpdk/spdk_pid466461 00:26:04.447 Removing: /var/run/dpdk/spdk_pid469780 00:26:04.447 Removing: /var/run/dpdk/spdk_pid472137 00:26:04.447 Removing: /var/run/dpdk/spdk_pid472676 00:26:04.447 Removing: /var/run/dpdk/spdk_pid483077 00:26:04.447 Removing: /var/run/dpdk/spdk_pid483248 00:26:04.447 Removing: /var/run/dpdk/spdk_pid486290 00:26:04.447 Removing: /var/run/dpdk/spdk_pid490631 00:26:04.447 Removing: /var/run/dpdk/spdk_pid492814 00:26:04.447 Removing: /var/run/dpdk/spdk_pid499644 00:26:04.447 Removing: 
/var/run/dpdk/spdk_pid516828 00:26:04.447 Removing: /var/run/dpdk/spdk_pid519072 00:26:04.447 Removing: /var/run/dpdk/spdk_pid551667 00:26:04.447 Removing: /var/run/dpdk/spdk_pid562612 00:26:04.447 Removing: /var/run/dpdk/spdk_pid584691 00:26:04.447 Removing: /var/run/dpdk/spdk_pid585376 00:26:04.447 Removing: /var/run/dpdk/spdk_pid586199 00:26:04.447 Removing: /var/run/dpdk/spdk_pid588819 00:26:04.447 Removing: /var/run/dpdk/spdk_pid593286 00:26:04.447 Removing: /var/run/dpdk/spdk_pid593947 00:26:04.447 Removing: /var/run/dpdk/spdk_pid594590 00:26:04.447 Removing: /var/run/dpdk/spdk_pid595265 00:26:04.447 Removing: /var/run/dpdk/spdk_pid595533 00:26:04.447 Removing: /var/run/dpdk/spdk_pid598298 00:26:04.447 Removing: /var/run/dpdk/spdk_pid598306 00:26:04.447 Removing: /var/run/dpdk/spdk_pid601226 00:26:04.447 Removing: /var/run/dpdk/spdk_pid601621 00:26:04.447 Removing: /var/run/dpdk/spdk_pid602014 00:26:04.447 Removing: /var/run/dpdk/spdk_pid602543 00:26:04.447 Removing: /var/run/dpdk/spdk_pid602548 00:26:04.447 Removing: /var/run/dpdk/spdk_pid605469 00:26:04.447 Removing: /var/run/dpdk/spdk_pid605901 00:26:04.447 Removing: /var/run/dpdk/spdk_pid609091 00:26:04.447 Removing: /var/run/dpdk/spdk_pid611066 00:26:04.447 Removing: /var/run/dpdk/spdk_pid614832 00:26:04.447 Removing: /var/run/dpdk/spdk_pid622198 00:26:04.447 Removing: /var/run/dpdk/spdk_pid622207 00:26:04.447 Removing: /var/run/dpdk/spdk_pid635349 00:26:04.447 Removing: /var/run/dpdk/spdk_pid635485 00:26:04.447 Removing: /var/run/dpdk/spdk_pid639429 00:26:04.447 Removing: /var/run/dpdk/spdk_pid639612 00:26:04.447 Removing: /var/run/dpdk/spdk_pid640838 00:26:04.447 Removing: /var/run/dpdk/spdk_pid643295 00:26:04.447 Clean 00:26:04.706 00:12:33 -- common/autotest_common.sh@1447 -- # return 0 00:26:04.706 00:12:33 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:26:04.706 00:12:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:04.706 00:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:04.706 00:12:33 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:26:04.706 00:12:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:04.706 00:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:04.706 00:12:33 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:26:04.706 00:12:33 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:26:04.706 00:12:33 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:26:04.706 00:12:33 -- spdk/autotest.sh@387 -- # hash lcov 00:26:04.706 00:12:33 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:04.706 00:12:33 -- spdk/autotest.sh@389 -- # hostname 00:26:04.706 00:12:33 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:26:04.963 geninfo: WARNING: invalid characters removed from testname! 
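The lcov passes that follow implement a plain capture-merge-filter flow: the pre-test baseline and the post-test capture are combined with -a, then sources that are not SPDK's own are pruned with successive -r filters. A condensed sketch using the same flags and filter patterns as the commands below, with $OUT standing in for the spdk/../output directory; the actual run also passes genhtml_* and geninfo_all_blocks rc options, omitted here for brevity:

  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # Merge the pre-test baseline and the post-test capture into one tracefile.
  lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

  # Strip coverage for DPDK, system headers, and auxiliary apps, one pattern at a time.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r $OUT/cov_total.info "$pat" -o $OUT/cov_total.info
  done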
00:26:31.491 00:12:59 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:34.774 00:13:03 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:37.301 00:13:06 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:39.862 00:13:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:43.144 00:13:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:45.669 00:13:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:48.196 00:13:17 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:48.196 00:13:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:48.196 00:13:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:48.196 00:13:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.196 00:13:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.196 00:13:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.196 00:13:17 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.196 00:13:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.196 00:13:17 -- paths/export.sh@5 -- $ export PATH 00:26:48.196 00:13:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.196 00:13:17 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:26:48.196 00:13:17 -- common/autobuild_common.sh@437 -- $ date +%s 00:26:48.196 00:13:17 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715724797.XXXXXX 00:26:48.196 00:13:17 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715724797.yP0pKa 00:26:48.196 00:13:17 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:26:48.196 00:13:17 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:26:48.196 00:13:17 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:26:48.196 00:13:17 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:26:48.196 00:13:17 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:26:48.196 00:13:17 -- common/autobuild_common.sh@453 -- $ get_config_params 00:26:48.196 00:13:17 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:26:48.196 00:13:17 -- common/autotest_common.sh@10 -- $ set +x 00:26:48.196 00:13:17 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:26:48.196 00:13:17 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:26:48.196 00:13:17 -- pm/common@17 -- $ local monitor 00:26:48.196 00:13:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:48.196 00:13:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:48.196 00:13:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:48.196 00:13:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:48.196 00:13:17 -- pm/common@21 -- $ date +%s 00:26:48.196 00:13:17 -- pm/common@25 -- $ sleep 1 00:26:48.196 00:13:17 -- pm/common@21 -- $ date +%s 
00:26:48.196 00:13:17 -- pm/common@21 -- $ date +%s 00:26:48.196 00:13:17 -- pm/common@21 -- $ date +%s 00:26:48.196 00:13:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715724797 00:26:48.196 00:13:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715724797 00:26:48.196 00:13:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715724797 00:26:48.196 00:13:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715724797 00:26:48.196 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715724797_collect-vmstat.pm.log 00:26:48.196 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715724797_collect-cpu-load.pm.log 00:26:48.196 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715724797_collect-cpu-temp.pm.log 00:26:48.196 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715724797_collect-bmc-pm.bmc.pm.log 00:26:49.570 00:13:18 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:26:49.570 00:13:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:26:49.570 00:13:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:26:49.570 00:13:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:26:49.570 00:13:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:26:49.570 00:13:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:26:49.570 00:13:18 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:49.570 00:13:18 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:26:49.570 00:13:18 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:26:49.570 00:13:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:26:49.570 00:13:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:49.570 00:13:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:49.570 00:13:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:49.570 00:13:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:49.570 00:13:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:26:49.570 00:13:18 -- pm/common@44 -- $ pid=656142 00:26:49.570 00:13:18 -- pm/common@50 -- $ kill -TERM 656142 00:26:49.570 00:13:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:49.570 00:13:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:26:49.570 00:13:18 -- pm/common@44 -- $ pid=656144 00:26:49.570 00:13:18 -- pm/common@50 -- $ kill -TERM 656144 00:26:49.570 00:13:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:26:49.570 00:13:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:26:49.570 00:13:18 -- pm/common@44 -- $ pid=656146 00:26:49.570 00:13:18 -- pm/common@50 -- $ kill -TERM 656146 00:26:49.570 00:13:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:49.570 00:13:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:26:49.570 00:13:18 -- pm/common@44 -- $ pid=656178 00:26:49.570 00:13:18 -- pm/common@50 -- $ sudo -E kill -TERM 656178 00:26:49.570 + [[ -n 313160 ]] 00:26:49.570 + sudo kill 313160 00:26:49.579 [Pipeline] } 00:26:49.598 [Pipeline] // stage 00:26:49.603 [Pipeline] } 00:26:49.621 [Pipeline] // timeout 00:26:49.625 [Pipeline] } 00:26:49.641 [Pipeline] // catchError 00:26:49.674 [Pipeline] } 00:26:49.693 [Pipeline] // wrap 00:26:49.699 [Pipeline] } 00:26:49.716 [Pipeline] // catchError 00:26:49.725 [Pipeline] stage 00:26:49.728 [Pipeline] { (Epilogue) 00:26:49.745 [Pipeline] catchError 00:26:49.747 [Pipeline] { 00:26:49.763 [Pipeline] echo 00:26:49.764 Cleanup processes 00:26:49.769 [Pipeline] sh 00:26:50.048 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:26:50.048 656277 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:26:50.048 656408 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:26:50.061 [Pipeline] sh 00:26:50.340 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:26:50.340 ++ grep -v 'sudo pgrep' 00:26:50.340 ++ awk '{print $1}' 00:26:50.340 + sudo kill -9 656277 00:26:50.353 [Pipeline] sh 00:26:50.633 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:00.637 [Pipeline] sh 00:27:00.919 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:00.919 Artifacts sizes are good 00:27:00.932 [Pipeline] archiveArtifacts 00:27:00.938 Archiving artifacts 00:27:01.092 [Pipeline] sh 00:27:01.367 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:27:01.381 [Pipeline] cleanWs 00:27:01.390 [WS-CLEANUP] Deleting project workspace... 00:27:01.391 [WS-CLEANUP] Deferred wipeout is used... 00:27:01.398 [WS-CLEANUP] done 00:27:01.400 [Pipeline] } 00:27:01.419 [Pipeline] // catchError 00:27:01.430 [Pipeline] sh 00:27:01.708 + logger -p user.info -t JENKINS-CI 00:27:01.715 [Pipeline] } 00:27:01.730 [Pipeline] // stage 00:27:01.736 [Pipeline] } 00:27:01.750 [Pipeline] // node 00:27:01.754 [Pipeline] End of Pipeline 00:27:01.785 Finished: SUCCESS